00:00:00.001 Started by upstream project "autotest-spdk-v24.05-vs-dpdk-v23.11" build number 108 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3286 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.133 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.134 The recommended git tool is: git 00:00:00.134 using credential 00000000-0000-0000-0000-000000000002 00:00:00.136 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.162 Fetching changes from the remote Git repository 00:00:00.165 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.193 Using shallow fetch with depth 1 00:00:00.193 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.193 > git --version # timeout=10 00:00:00.217 > git --version # 'git version 2.39.2' 00:00:00.217 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.232 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.232 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.393 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.406 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.418 Checking out Revision 1c6ed56008363df82da0fcec030d6d5a1f7bd340 (FETCH_HEAD) 00:00:06.418 > git config core.sparsecheckout # timeout=10 00:00:06.477 > git read-tree -mu HEAD # timeout=10 00:00:06.495 > git checkout -f 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=5 00:00:06.528 Commit message: "spdk-abi-per-patch: pass revision to subbuild" 00:00:06.528 > git rev-list --no-walk 1c6ed56008363df82da0fcec030d6d5a1f7bd340 # timeout=10 00:00:06.633 [Pipeline] Start of Pipeline 00:00:06.646 [Pipeline] library 00:00:06.647 Loading library shm_lib@master 00:00:06.647 Library shm_lib@master is cached. Copying from home. 00:00:06.662 [Pipeline] node 00:00:06.666 Running on VM-host-WFP7 in /var/jenkins/workspace/nvme-vg-autotest 00:00:06.669 [Pipeline] { 00:00:06.678 [Pipeline] catchError 00:00:06.680 [Pipeline] { 00:00:06.689 [Pipeline] wrap 00:00:06.695 [Pipeline] { 00:00:06.700 [Pipeline] stage 00:00:06.702 [Pipeline] { (Prologue) 00:00:06.715 [Pipeline] echo 00:00:06.716 Node: VM-host-WFP7 00:00:06.720 [Pipeline] cleanWs 00:00:06.728 [WS-CLEANUP] Deleting project workspace... 00:00:06.728 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.733 [WS-CLEANUP] done 00:00:06.886 [Pipeline] setCustomBuildProperty 00:00:06.946 [Pipeline] httpRequest 00:00:06.960 [Pipeline] echo 00:00:06.962 Sorcerer 10.211.164.101 is alive 00:00:06.967 [Pipeline] httpRequest 00:00:06.971 HttpMethod: GET 00:00:06.971 URL: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:06.972 Sending request to url: http://10.211.164.101/packages/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:06.985 Response Code: HTTP/1.1 200 OK 00:00:06.985 Success: Status code 200 is in the accepted range: 200,404 00:00:06.985 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:26.038 [Pipeline] sh 00:00:26.322 + tar --no-same-owner -xf jbp_1c6ed56008363df82da0fcec030d6d5a1f7bd340.tar.gz 00:00:26.338 [Pipeline] httpRequest 00:00:26.371 [Pipeline] echo 00:00:26.372 Sorcerer 10.211.164.101 is alive 00:00:26.380 [Pipeline] httpRequest 00:00:26.384 HttpMethod: GET 00:00:26.385 URL: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:26.385 Sending request to url: http://10.211.164.101/packages/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:26.402 Response Code: HTTP/1.1 200 OK 00:00:26.403 Success: Status code 200 is in the accepted range: 200,404 00:00:26.404 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:50.660 [Pipeline] sh 00:00:50.936 + tar --no-same-owner -xf spdk_5fa2f5086d008303c3936a88b8ec036d6970b1e3.tar.gz 00:00:53.496 [Pipeline] sh 00:00:53.784 + git -C spdk log --oneline -n5 00:00:53.784 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:00:53.784 330a4f94d nvme: check pthread_mutex_destroy() return value 00:00:53.784 7b72c3ced nvme: add nvme_ctrlr_lock 00:00:53.784 fc7a37019 nvme: always use nvme_robust_mutex_lock for ctrlr_lock 00:00:53.784 3e04ecdd1 bdev_nvme: use spdk_nvme_ctrlr_fail() on ctrlr_loss_timeout 00:00:53.805 [Pipeline] withCredentials 00:00:53.816 > git --version # timeout=10 00:00:53.827 > git --version # 'git version 2.39.2' 00:00:53.844 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:00:53.846 [Pipeline] { 00:00:53.855 [Pipeline] retry 00:00:53.857 [Pipeline] { 00:00:53.874 [Pipeline] sh 00:00:54.154 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:00:55.547 [Pipeline] } 00:00:55.569 [Pipeline] // retry 00:00:55.573 [Pipeline] } 00:00:55.593 [Pipeline] // withCredentials 00:00:55.603 [Pipeline] httpRequest 00:00:55.621 [Pipeline] echo 00:00:55.622 Sorcerer 10.211.164.101 is alive 00:00:55.628 [Pipeline] httpRequest 00:00:55.650 HttpMethod: GET 00:00:55.650 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:55.651 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:00:55.653 Response Code: HTTP/1.1 200 OK 00:00:55.654 Success: Status code 200 is in the accepted range: 200,404 00:00:55.654 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:02.077 [Pipeline] sh 00:01:02.376 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:03.764 [Pipeline] sh 00:01:04.041 + git -C dpdk log --oneline -n5 00:01:04.041 eeb0605f11 version: 23.11.0 00:01:04.041 238778122a doc: update release notes for 23.11 00:01:04.041 46aa6b3cfc doc: fix description of RSS features 00:01:04.041 
dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:04.041 7e421ae345 devtools: support skipping forbid rule check 00:01:04.056 [Pipeline] writeFile 00:01:04.068 [Pipeline] sh 00:01:04.348 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:04.359 [Pipeline] sh 00:01:04.638 + cat autorun-spdk.conf 00:01:04.638 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.638 SPDK_TEST_NVME=1 00:01:04.638 SPDK_TEST_FTL=1 00:01:04.638 SPDK_TEST_ISAL=1 00:01:04.638 SPDK_RUN_ASAN=1 00:01:04.638 SPDK_RUN_UBSAN=1 00:01:04.638 SPDK_TEST_XNVME=1 00:01:04.638 SPDK_TEST_NVME_FDP=1 00:01:04.638 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:04.638 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:04.638 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:04.645 RUN_NIGHTLY=1 00:01:04.647 [Pipeline] } 00:01:04.663 [Pipeline] // stage 00:01:04.694 [Pipeline] stage 00:01:04.696 [Pipeline] { (Run VM) 00:01:04.709 [Pipeline] sh 00:01:04.993 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:04.993 + echo 'Start stage prepare_nvme.sh' 00:01:04.993 Start stage prepare_nvme.sh 00:01:04.993 + [[ -n 0 ]] 00:01:04.993 + disk_prefix=ex0 00:01:04.993 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:04.993 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:04.993 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:04.993 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:04.993 ++ SPDK_TEST_NVME=1 00:01:04.993 ++ SPDK_TEST_FTL=1 00:01:04.993 ++ SPDK_TEST_ISAL=1 00:01:04.993 ++ SPDK_RUN_ASAN=1 00:01:04.993 ++ SPDK_RUN_UBSAN=1 00:01:04.993 ++ SPDK_TEST_XNVME=1 00:01:04.993 ++ SPDK_TEST_NVME_FDP=1 00:01:04.993 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:01:04.993 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:04.993 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:04.993 ++ RUN_NIGHTLY=1 00:01:04.993 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:04.993 + nvme_files=() 00:01:04.993 + declare -A nvme_files 00:01:04.993 + backend_dir=/var/lib/libvirt/images/backends 00:01:04.993 + nvme_files['nvme.img']=5G 00:01:04.993 + nvme_files['nvme-cmb.img']=5G 00:01:04.993 + nvme_files['nvme-multi0.img']=4G 00:01:04.993 + nvme_files['nvme-multi1.img']=4G 00:01:04.993 + nvme_files['nvme-multi2.img']=4G 00:01:04.993 + nvme_files['nvme-openstack.img']=8G 00:01:04.993 + nvme_files['nvme-zns.img']=5G 00:01:04.993 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:04.993 + (( SPDK_TEST_FTL == 1 )) 00:01:04.993 + nvme_files["nvme-ftl.img"]=6G 00:01:04.993 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:04.993 + nvme_files["nvme-fdp.img"]=1G 00:01:04.993 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:04.993 + for nvme in "${!nvme_files[@]}" 00:01:04.993 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:04.993 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:04.993 + for nvme in "${!nvme_files[@]}" 00:01:04.993 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-ftl.img -s 6G 00:01:04.993 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:04.993 + for nvme in "${!nvme_files[@]}" 00:01:04.993 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:04.993 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:04.993 + for nvme in "${!nvme_files[@]}" 00:01:04.993 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:04.993 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:04.993 + for nvme in "${!nvme_files[@]}" 00:01:04.993 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:04.993 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:04.993 + for nvme in "${!nvme_files[@]}" 00:01:04.993 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:04.993 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:04.993 + for nvme in "${!nvme_files[@]}" 00:01:04.993 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:05.252 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:05.252 + for nvme in "${!nvme_files[@]}" 00:01:05.252 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-fdp.img -s 1G 00:01:05.252 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:05.252 + for nvme in "${!nvme_files[@]}" 00:01:05.252 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:05.252 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:05.252 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:05.252 + echo 'End stage prepare_nvme.sh' 00:01:05.252 End stage prepare_nvme.sh 00:01:05.264 [Pipeline] sh 00:01:05.548 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:05.548 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex0-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex0-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora38 00:01:05.548 00:01:05.548 
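The prepare_nvme.sh trace above follows a simple pattern: an associative array maps backing-image names to sizes, the FTL and FDP images are appended only when the corresponding SPDK_TEST_* flags are set, and create_nvme_img.sh is invoked once per entry. The "Formatting ... fmt=raw ... preallocation=falloc" lines are qemu-img's own output, so a condensed sketch of the loop looks like this (qemu-img standing in for create_nvme_img.sh, whose internals the log does not show):

    #!/usr/bin/env bash
    # Condensed sketch of the prepare_nvme.sh image-creation loop traced above.
    # Sizes mirror the log; qemu-img is assumed as the underlying tool, since
    # the "Formatting ..." lines in the trace are qemu-img output.
    set -euo pipefail
    backend_dir=/var/lib/libvirt/images/backends
    declare -A nvme_files=(
        [nvme.img]=5G        [nvme-cmb.img]=5G
        [nvme-multi0.img]=4G [nvme-multi1.img]=4G [nvme-multi2.img]=4G
        [nvme-openstack.img]=8G [nvme-zns.img]=5G
    )
    if [[ "${SPDK_TEST_FTL:-0}" == 1 ]];      then nvme_files[nvme-ftl.img]=6G; fi
    if [[ "${SPDK_TEST_NVME_FDP:-0}" == 1 ]]; then nvme_files[nvme-fdp.img]=1G; fi
    for img in "${!nvme_files[@]}"; do
        qemu-img create -f raw -o preallocation=falloc \
            "$backend_dir/ex0-$img" "${nvme_files[$img]}"
    done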
DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:05.548 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:05.548 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:05.548 HELP=0 00:01:05.548 DRY_RUN=0 00:01:05.548 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,/var/lib/libvirt/images/backends/ex0-nvme-fdp.img, 00:01:05.548 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:05.548 NVME_AUTO_CREATE=0 00:01:05.548 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,, 00:01:05.548 NVME_CMB=,,,, 00:01:05.548 NVME_PMR=,,,, 00:01:05.548 NVME_ZNS=,,,, 00:01:05.548 NVME_MS=true,,,, 00:01:05.548 NVME_FDP=,,,on, 00:01:05.548 SPDK_VAGRANT_DISTRO=fedora38 00:01:05.548 SPDK_VAGRANT_VMCPU=10 00:01:05.548 SPDK_VAGRANT_VMRAM=12288 00:01:05.548 SPDK_VAGRANT_PROVIDER=libvirt 00:01:05.548 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:05.548 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:05.548 SPDK_OPENSTACK_NETWORK=0 00:01:05.548 VAGRANT_PACKAGE_BOX=0 00:01:05.548 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:05.548 FORCE_DISTRO=true 00:01:05.548 VAGRANT_BOX_VERSION= 00:01:05.548 EXTRA_VAGRANTFILES= 00:01:05.548 NIC_MODEL=virtio 00:01:05.548 00:01:05.548 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt' 00:01:05.548 /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:08.079 Bringing machine 'default' up with 'libvirt' provider... 00:01:08.338 ==> default: Creating image (snapshot of base box volume). 00:01:08.598 ==> default: Creating domain with the following settings... 
00:01:08.598 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721562187_7c5e696591beb3859f19 00:01:08.598 ==> default: -- Domain type: kvm 00:01:08.598 ==> default: -- Cpus: 10 00:01:08.598 ==> default: -- Feature: acpi 00:01:08.598 ==> default: -- Feature: apic 00:01:08.598 ==> default: -- Feature: pae 00:01:08.598 ==> default: -- Memory: 12288M 00:01:08.598 ==> default: -- Memory Backing: hugepages: 00:01:08.598 ==> default: -- Management MAC: 00:01:08.598 ==> default: -- Loader: 00:01:08.598 ==> default: -- Nvram: 00:01:08.598 ==> default: -- Base box: spdk/fedora38 00:01:08.598 ==> default: -- Storage pool: default 00:01:08.598 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1721562187_7c5e696591beb3859f19.img (20G) 00:01:08.598 ==> default: -- Volume Cache: default 00:01:08.598 ==> default: -- Kernel: 00:01:08.598 ==> default: -- Initrd: 00:01:08.598 ==> default: -- Graphics Type: vnc 00:01:08.598 ==> default: -- Graphics Port: -1 00:01:08.598 ==> default: -- Graphics IP: 127.0.0.1 00:01:08.598 ==> default: -- Graphics Password: Not defined 00:01:08.598 ==> default: -- Video Type: cirrus 00:01:08.598 ==> default: -- Video VRAM: 9216 00:01:08.598 ==> default: -- Sound Type: 00:01:08.598 ==> default: -- Keymap: en-us 00:01:08.598 ==> default: -- TPM Path: 00:01:08.598 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:08.598 ==> default: -- Command line args: 00:01:08.598 ==> default: -> value=-device, 00:01:08.598 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:08.598 ==> default: -> value=-drive, 00:01:08.598 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:08.598 ==> default: -> value=-device, 00:01:08.598 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:08.598 ==> default: -> value=-device, 00:01:08.598 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:08.598 ==> default: -> value=-drive, 00:01:08.598 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-1-drive0, 00:01:08.598 ==> default: -> value=-device, 00:01:08.598 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:08.598 ==> default: -> value=-device, 00:01:08.598 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:08.598 ==> default: -> value=-drive, 00:01:08.598 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:08.598 ==> default: -> value=-device, 00:01:08.598 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:08.598 ==> default: -> value=-drive, 00:01:08.598 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:08.598 ==> default: -> value=-device, 00:01:08.598 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:08.598 ==> default: -> value=-drive, 00:01:08.598 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:08.598 ==> default: -> value=-device, 00:01:08.598 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:08.598 ==> default: -> value=-device, 00:01:08.598 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:08.598 ==> default: -> value=-device, 00:01:08.598 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:08.598 ==> default: -> value=-drive, 00:01:08.598 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:08.598 ==> default: -> value=-device, 00:01:08.598 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:08.598 ==> default: Creating shared folders metadata... 00:01:08.598 ==> default: Starting domain. 00:01:09.976 ==> default: Waiting for domain to get an IP address... 00:01:28.060 ==> default: Waiting for SSH to become available... 00:01:29.440 ==> default: Configuring and enabling network interfaces... 00:01:36.001 default: SSH address: 192.168.121.32:22 00:01:36.001 default: SSH username: vagrant 00:01:36.001 default: SSH auth method: private key 00:01:37.899 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:46.002 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:01:52.570 ==> default: Mounting SSHFS shared folder... 00:01:53.942 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:01:53.942 ==> default: Checking Mount.. 00:01:55.314 ==> default: Folder Successfully Mounted! 00:01:55.314 ==> default: Running provisioner: file... 00:01:56.246 default: ~/.gitconfig => .gitconfig 00:01:57.190 00:01:57.190 SUCCESS! 00:01:57.190 00:01:57.190 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:01:57.190 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:57.190 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:01:57.190 00:01:57.198 [Pipeline] } 00:01:57.215 [Pipeline] // stage 00:01:57.223 [Pipeline] dir 00:01:57.223 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora38-libvirt 00:01:57.225 [Pipeline] { 00:01:57.238 [Pipeline] catchError 00:01:57.240 [Pipeline] { 00:01:57.253 [Pipeline] sh 00:01:57.530 + vagrant ssh-config --host vagrant 00:01:57.530 + sed -ne /^Host/,$p 00:01:57.530 + tee ssh_conf 00:02:00.051 Host vagrant 00:02:00.051 HostName 192.168.121.32 00:02:00.051 User vagrant 00:02:00.051 Port 22 00:02:00.051 UserKnownHostsFile /dev/null 00:02:00.051 StrictHostKeyChecking no 00:02:00.051 PasswordAuthentication no 00:02:00.052 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:00.052 IdentitiesOnly yes 00:02:00.052 LogLevel FATAL 00:02:00.052 ForwardAgent yes 00:02:00.052 ForwardX11 yes 00:02:00.052 00:02:00.064 [Pipeline] withEnv 00:02:00.066 [Pipeline] { 00:02:00.080 [Pipeline] sh 00:02:00.356 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:00.356 source /etc/os-release 00:02:00.356 [[ -e /image.version ]] && img=$(< /image.version) 00:02:00.356 # Minimal, systemd-like check. 
00:02:00.356 if [[ -e /.dockerenv ]]; then 00:02:00.356 # Clear garbage from the node's name: 00:02:00.356 # agt-er_autotest_547-896 -> autotest_547-896 00:02:00.356 # $HOSTNAME is the actual container id 00:02:00.356 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:00.356 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:00.356 # We can assume this is a mount from a host where container is running, 00:02:00.356 # so fetch its hostname to easily identify the target swarm worker. 00:02:00.356 container="$(< /etc/hostname) ($agent)" 00:02:00.356 else 00:02:00.356 # Fallback 00:02:00.356 container=$agent 00:02:00.356 fi 00:02:00.356 fi 00:02:00.356 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:00.356 00:02:00.623 [Pipeline] } 00:02:00.642 [Pipeline] // withEnv 00:02:00.650 [Pipeline] setCustomBuildProperty 00:02:00.664 [Pipeline] stage 00:02:00.666 [Pipeline] { (Tests) 00:02:00.684 [Pipeline] sh 00:02:00.962 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:01.236 [Pipeline] sh 00:02:01.519 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:01.794 [Pipeline] timeout 00:02:01.795 Timeout set to expire in 40 min 00:02:01.796 [Pipeline] { 00:02:01.813 [Pipeline] sh 00:02:02.095 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:02.664 HEAD is now at 5fa2f5086 nvme: add lock_depth for ctrlr_lock 00:02:02.677 [Pipeline] sh 00:02:02.959 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:03.232 [Pipeline] sh 00:02:03.514 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:03.788 [Pipeline] sh 00:02:04.070 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:02:04.328 ++ readlink -f spdk_repo 00:02:04.328 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:04.328 + [[ -n /home/vagrant/spdk_repo ]] 00:02:04.328 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:04.328 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:04.328 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:04.328 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:04.328 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:04.328 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:04.328 + cd /home/vagrant/spdk_repo 00:02:04.328 + source /etc/os-release 00:02:04.328 ++ NAME='Fedora Linux' 00:02:04.328 ++ VERSION='38 (Cloud Edition)' 00:02:04.328 ++ ID=fedora 00:02:04.328 ++ VERSION_ID=38 00:02:04.328 ++ VERSION_CODENAME= 00:02:04.328 ++ PLATFORM_ID=platform:f38 00:02:04.329 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:04.329 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:04.329 ++ LOGO=fedora-logo-icon 00:02:04.329 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:04.329 ++ HOME_URL=https://fedoraproject.org/ 00:02:04.329 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:04.329 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:04.329 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:04.329 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:04.329 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:04.329 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:04.329 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:04.329 ++ SUPPORT_END=2024-05-14 00:02:04.329 ++ VARIANT='Cloud Edition' 00:02:04.329 ++ VARIANT_ID=cloud 00:02:04.329 + uname -a 00:02:04.329 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:04.329 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:04.586 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:04.844 Hugepages 00:02:04.844 node hugesize free / total 00:02:04.844 node0 1048576kB 0 / 0 00:02:04.844 node0 2048kB 0 / 0 00:02:04.844 00:02:04.844 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:05.102 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:05.102 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:05.102 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:05.102 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:05.102 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:02:05.102 + rm -f /tmp/spdk-ld-path 00:02:05.102 + source autorun-spdk.conf 00:02:05.102 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.102 ++ SPDK_TEST_NVME=1 00:02:05.102 ++ SPDK_TEST_FTL=1 00:02:05.102 ++ SPDK_TEST_ISAL=1 00:02:05.102 ++ SPDK_RUN_ASAN=1 00:02:05.102 ++ SPDK_RUN_UBSAN=1 00:02:05.102 ++ SPDK_TEST_XNVME=1 00:02:05.102 ++ SPDK_TEST_NVME_FDP=1 00:02:05.102 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:05.102 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:05.102 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:05.102 ++ RUN_NIGHTLY=1 00:02:05.102 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:05.102 + [[ -n '' ]] 00:02:05.102 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:05.102 + for M in /var/spdk/build-*-manifest.txt 00:02:05.102 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:05.102 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:05.102 + for M in /var/spdk/build-*-manifest.txt 00:02:05.102 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:05.102 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:05.102 ++ uname 00:02:05.102 + [[ Linux == \L\i\n\u\x ]] 00:02:05.102 + sudo dmesg -T 00:02:05.102 + sudo dmesg --clear 00:02:05.361 + dmesg_pid=6098 00:02:05.361 + sudo dmesg -Tw 00:02:05.361 + [[ Fedora Linux == FreeBSD ]] 
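The setup.sh status table above shows the four QEMU controllers (serials 12340-12343 on the command line earlier) enumerated in the guest as nvme0-nvme3, in an order that does not match their PCI addresses. When that ordering matters, the kernel's nvme sysfs attributes let you map device names back to serials; a small illustrative loop (not part of autoruner.sh):

    # Map guest nvme names back to QEMU-assigned serials and PCI addresses.
    # Illustrative helper only; relies on standard /sys/class/nvme attributes.
    for ctrl in /sys/class/nvme/nvme*; do
        printf '%s serial=%s addr=%s\n' \
            "$(basename "$ctrl")" "$(<"$ctrl/serial")" "$(<"$ctrl/address")"
    done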
00:02:05.361 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:05.361 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:05.361 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:05.361 + [[ -x /usr/src/fio-static/fio ]] 00:02:05.361 + export FIO_BIN=/usr/src/fio-static/fio 00:02:05.361 + FIO_BIN=/usr/src/fio-static/fio 00:02:05.361 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:05.361 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:05.361 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:05.361 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:05.361 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:05.361 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:05.361 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:05.361 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:05.361 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:05.361 Test configuration: 00:02:05.361 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.361 SPDK_TEST_NVME=1 00:02:05.361 SPDK_TEST_FTL=1 00:02:05.361 SPDK_TEST_ISAL=1 00:02:05.361 SPDK_RUN_ASAN=1 00:02:05.361 SPDK_RUN_UBSAN=1 00:02:05.361 SPDK_TEST_XNVME=1 00:02:05.361 SPDK_TEST_NVME_FDP=1 00:02:05.361 SPDK_TEST_NATIVE_DPDK=v23.11 00:02:05.361 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:05.361 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:05.361 RUN_NIGHTLY=1 11:44:04 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:05.361 11:44:04 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:05.361 11:44:04 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:05.361 11:44:04 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:05.361 11:44:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.361 11:44:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.361 11:44:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.361 11:44:04 -- paths/export.sh@5 -- $ export PATH 00:02:05.361 11:44:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.361 
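The paths/export.sh trace above prepends the Go, golangci, and protoc directories on every invocation, which is why each appears twice in the final PATH echo. The duplication is harmless, but for contrast, a generic guard that keeps PATH duplicate-free looks like this (a common shell idiom, not what export.sh itself does):

    # Generic duplicate-free PATH prepend; shown for contrast with the
    # unconditional prepends in paths/export.sh above.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already present, do nothing
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH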
11:44:04 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:05.361 11:44:04 -- common/autobuild_common.sh@437 -- $ date +%s 00:02:05.361 11:44:04 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721562244.XXXXXX 00:02:05.361 11:44:04 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721562244.IU5dKZ 00:02:05.361 11:44:04 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:02:05.361 11:44:04 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:02:05.361 11:44:04 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:05.361 11:44:04 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:05.361 11:44:04 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:05.361 11:44:04 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:05.361 11:44:04 -- common/autobuild_common.sh@453 -- $ get_config_params 00:02:05.361 11:44:04 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:02:05.361 11:44:04 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.361 11:44:04 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme' 00:02:05.361 11:44:04 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:02:05.361 11:44:04 -- pm/common@17 -- $ local monitor 00:02:05.361 11:44:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.361 11:44:04 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.361 11:44:04 -- pm/common@21 -- $ date +%s 00:02:05.361 11:44:04 -- pm/common@25 -- $ sleep 1 00:02:05.361 11:44:04 -- pm/common@21 -- $ date +%s 00:02:05.361 11:44:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721562244 00:02:05.361 11:44:04 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721562244 00:02:05.361 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721562244_collect-vmstat.pm.log 00:02:05.361 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721562244_collect-cpu-load.pm.log 00:02:06.295 11:44:05 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:02:06.295 11:44:05 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:06.295 11:44:05 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:06.295 11:44:05 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:06.553 11:44:05 -- spdk/autobuild.sh@16 -- $ date -u 00:02:06.553 Sun Jul 21 11:44:05 AM UTC 2024 00:02:06.553 11:44:05 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:06.553 v24.05-13-g5fa2f5086 00:02:06.553 11:44:05 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:06.553 11:44:05 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:06.553 11:44:05 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:06.553 11:44:05 -- 
common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:06.553 11:44:05 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.553 ************************************ 00:02:06.553 START TEST asan 00:02:06.553 ************************************ 00:02:06.553 using asan 00:02:06.553 11:44:05 asan -- common/autotest_common.sh@1121 -- $ echo 'using asan' 00:02:06.553 00:02:06.553 real 0m0.001s 00:02:06.553 user 0m0.001s 00:02:06.553 sys 0m0.000s 00:02:06.553 11:44:05 asan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:06.553 11:44:05 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:06.553 ************************************ 00:02:06.553 END TEST asan 00:02:06.553 ************************************ 00:02:06.553 11:44:05 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:06.553 11:44:05 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:06.553 11:44:05 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:02:06.553 11:44:05 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:06.553 11:44:05 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.553 ************************************ 00:02:06.553 START TEST ubsan 00:02:06.553 ************************************ 00:02:06.553 using ubsan 00:02:06.553 11:44:05 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan' 00:02:06.553 00:02:06.553 real 0m0.000s 00:02:06.553 user 0m0.000s 00:02:06.553 sys 0m0.000s 00:02:06.553 11:44:05 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:06.553 11:44:05 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:06.553 ************************************ 00:02:06.553 END TEST ubsan 00:02:06.553 ************************************ 00:02:06.553 11:44:05 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:06.553 11:44:05 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:06.553 11:44:05 -- common/autobuild_common.sh@429 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:06.553 11:44:05 -- common/autotest_common.sh@1097 -- $ '[' 2 -le 1 ']' 00:02:06.553 11:44:05 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:02:06.553 11:44:05 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.553 ************************************ 00:02:06.553 START TEST build_native_dpdk 00:02:06.553 ************************************ 00:02:06.553 11:44:05 build_native_dpdk -- common/autotest_common.sh@1121 -- $ _build_native_dpdk 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc 
-dumpversion 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:06.553 eeb0605f11 version: 23.11.0 00:02:06.553 238778122a doc: update release notes for 23.11 00:02:06.553 46aa6b3cfc doc: fix description of RSS features 00:02:06.553 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:06.553 7e421ae345 devtools: support skipping forbid rule check 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-: 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1 00:02:06.553 
11:44:05 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-: 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<' 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@342 -- $ : 1 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:06.553 11:44:05 build_native_dpdk -- scripts/common.sh@364 -- $ return 1 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:06.553 patching file config/rte_config.h 00:02:06.553 Hunk #1 succeeded at 60 (offset 1 line). 
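The xtrace above walks scripts/common.sh's version comparison: lt 23.11.0 21.11.0 splits both strings on ".-:" into arrays, compares them field by field, and returns 1 because 23 > 21, which is what sends the build down the "patch rte_config.h for newer DPDK" path. A condensed sketch of the same algorithm (assuming purely numeric fields; the real script also normalizes non-numeric fields via its decimal helper):

    # Condensed sketch of the lt/cmp_versions logic traced above.
    # Returns 0 (true) if $1 < $2, 1 otherwise; assumes numeric fields.
    version_lt() {
        local -a v1 v2; local i n
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1; fi
            if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0; fi
        done
        return 1                      # equal is not less-than
    }
    version_lt 23.11.0 21.11.0 || echo "23.11.0 is not < 21.11.0"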
00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:02:06.553 11:44:05 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:06.810 11:44:05 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:12.077 The Meson build system 00:02:12.077 Version: 1.3.1 00:02:12.077 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:12.077 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:12.077 Build type: native build 00:02:12.077 Program cat found: YES (/usr/bin/cat) 00:02:12.077 Project name: DPDK 00:02:12.077 Project version: 23.11.0 00:02:12.077 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:12.077 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:12.077 Host machine cpu family: x86_64 00:02:12.077 Host machine cpu: x86_64 00:02:12.077 Message: ## Building in Developer Mode ## 00:02:12.077 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:12.077 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:12.077 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:12.077 Program python3 found: YES (/usr/bin/python3) 00:02:12.077 Program cat found: YES (/usr/bin/cat) 00:02:12.077 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
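The last meson line above warns that the -Dmachine=native option passed on the configure line is deprecated in favor of cpu_instruction_set. For reference, the same invocation with the replacement option would look like this (a sketch; every other flag kept exactly as in the trace, and meson setup used as the modern spelling of the implicit setup command):

    # Same DPDK configure call as traced above, with -Dmachine=native swapped
    # for the replacement option meson's warning names.
    meson setup build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Dcpu_instruction_set=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,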
00:02:12.077 Compiler for C supports arguments -march=native: YES 00:02:12.077 Checking for size of "void *" : 8 00:02:12.077 Checking for size of "void *" : 8 (cached) 00:02:12.077 Library m found: YES 00:02:12.077 Library numa found: YES 00:02:12.077 Has header "numaif.h" : YES 00:02:12.077 Library fdt found: NO 00:02:12.077 Library execinfo found: NO 00:02:12.077 Has header "execinfo.h" : YES 00:02:12.077 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:12.077 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:12.077 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:12.077 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:12.077 Run-time dependency openssl found: YES 3.0.9 00:02:12.077 Run-time dependency libpcap found: YES 1.10.4 00:02:12.077 Has header "pcap.h" with dependency libpcap: YES 00:02:12.077 Compiler for C supports arguments -Wcast-qual: YES 00:02:12.077 Compiler for C supports arguments -Wdeprecated: YES 00:02:12.077 Compiler for C supports arguments -Wformat: YES 00:02:12.077 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:12.077 Compiler for C supports arguments -Wformat-security: NO 00:02:12.077 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:12.077 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:12.077 Compiler for C supports arguments -Wnested-externs: YES 00:02:12.077 Compiler for C supports arguments -Wold-style-definition: YES 00:02:12.077 Compiler for C supports arguments -Wpointer-arith: YES 00:02:12.077 Compiler for C supports arguments -Wsign-compare: YES 00:02:12.077 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:12.077 Compiler for C supports arguments -Wundef: YES 00:02:12.077 Compiler for C supports arguments -Wwrite-strings: YES 00:02:12.077 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:12.077 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:12.077 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:12.077 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:12.077 Program objdump found: YES (/usr/bin/objdump) 00:02:12.077 Compiler for C supports arguments -mavx512f: YES 00:02:12.077 Checking if "AVX512 checking" compiles: YES 00:02:12.077 Fetching value of define "__SSE4_2__" : 1 00:02:12.077 Fetching value of define "__AES__" : 1 00:02:12.077 Fetching value of define "__AVX__" : 1 00:02:12.077 Fetching value of define "__AVX2__" : 1 00:02:12.077 Fetching value of define "__AVX512BW__" : 1 00:02:12.077 Fetching value of define "__AVX512CD__" : 1 00:02:12.077 Fetching value of define "__AVX512DQ__" : 1 00:02:12.077 Fetching value of define "__AVX512F__" : 1 00:02:12.077 Fetching value of define "__AVX512VL__" : 1 00:02:12.077 Fetching value of define "__PCLMUL__" : 1 00:02:12.077 Fetching value of define "__RDRND__" : 1 00:02:12.077 Fetching value of define "__RDSEED__" : 1 00:02:12.077 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:12.077 Fetching value of define "__znver1__" : (undefined) 00:02:12.077 Fetching value of define "__znver2__" : (undefined) 00:02:12.077 Fetching value of define "__znver3__" : (undefined) 00:02:12.077 Fetching value of define "__znver4__" : (undefined) 00:02:12.077 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:12.077 Message: lib/log: Defining dependency "log" 00:02:12.077 Message: lib/kvargs: Defining dependency "kvargs" 00:02:12.077 Message: lib/telemetry: Defining dependency 
"telemetry" 00:02:12.077 Checking for function "getentropy" : NO 00:02:12.077 Message: lib/eal: Defining dependency "eal" 00:02:12.077 Message: lib/ring: Defining dependency "ring" 00:02:12.077 Message: lib/rcu: Defining dependency "rcu" 00:02:12.077 Message: lib/mempool: Defining dependency "mempool" 00:02:12.077 Message: lib/mbuf: Defining dependency "mbuf" 00:02:12.077 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:12.077 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:12.077 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:12.077 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:12.077 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:12.077 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:12.077 Compiler for C supports arguments -mpclmul: YES 00:02:12.077 Compiler for C supports arguments -maes: YES 00:02:12.077 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:12.077 Compiler for C supports arguments -mavx512bw: YES 00:02:12.077 Compiler for C supports arguments -mavx512dq: YES 00:02:12.077 Compiler for C supports arguments -mavx512vl: YES 00:02:12.077 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:12.077 Compiler for C supports arguments -mavx2: YES 00:02:12.077 Compiler for C supports arguments -mavx: YES 00:02:12.077 Message: lib/net: Defining dependency "net" 00:02:12.077 Message: lib/meter: Defining dependency "meter" 00:02:12.078 Message: lib/ethdev: Defining dependency "ethdev" 00:02:12.078 Message: lib/pci: Defining dependency "pci" 00:02:12.078 Message: lib/cmdline: Defining dependency "cmdline" 00:02:12.078 Message: lib/metrics: Defining dependency "metrics" 00:02:12.078 Message: lib/hash: Defining dependency "hash" 00:02:12.078 Message: lib/timer: Defining dependency "timer" 00:02:12.078 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:12.078 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:12.078 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:12.078 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:12.078 Message: lib/acl: Defining dependency "acl" 00:02:12.078 Message: lib/bbdev: Defining dependency "bbdev" 00:02:12.078 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:12.078 Run-time dependency libelf found: YES 0.190 00:02:12.078 Message: lib/bpf: Defining dependency "bpf" 00:02:12.078 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:12.078 Message: lib/compressdev: Defining dependency "compressdev" 00:02:12.078 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:12.078 Message: lib/distributor: Defining dependency "distributor" 00:02:12.078 Message: lib/dmadev: Defining dependency "dmadev" 00:02:12.078 Message: lib/efd: Defining dependency "efd" 00:02:12.078 Message: lib/eventdev: Defining dependency "eventdev" 00:02:12.078 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:12.078 Message: lib/gpudev: Defining dependency "gpudev" 00:02:12.078 Message: lib/gro: Defining dependency "gro" 00:02:12.078 Message: lib/gso: Defining dependency "gso" 00:02:12.078 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:12.078 Message: lib/jobstats: Defining dependency "jobstats" 00:02:12.078 Message: lib/latencystats: Defining dependency "latencystats" 00:02:12.078 Message: lib/lpm: Defining dependency "lpm" 00:02:12.078 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:12.078 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:12.078 Fetching value of define "__AVX512IFMA__" : 
(undefined) 00:02:12.078 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:12.078 Message: lib/member: Defining dependency "member" 00:02:12.078 Message: lib/pcapng: Defining dependency "pcapng" 00:02:12.078 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:12.078 Message: lib/power: Defining dependency "power" 00:02:12.078 Message: lib/rawdev: Defining dependency "rawdev" 00:02:12.078 Message: lib/regexdev: Defining dependency "regexdev" 00:02:12.078 Message: lib/mldev: Defining dependency "mldev" 00:02:12.078 Message: lib/rib: Defining dependency "rib" 00:02:12.078 Message: lib/reorder: Defining dependency "reorder" 00:02:12.078 Message: lib/sched: Defining dependency "sched" 00:02:12.078 Message: lib/security: Defining dependency "security" 00:02:12.078 Message: lib/stack: Defining dependency "stack" 00:02:12.078 Has header "linux/userfaultfd.h" : YES 00:02:12.078 Has header "linux/vduse.h" : YES 00:02:12.078 Message: lib/vhost: Defining dependency "vhost" 00:02:12.078 Message: lib/ipsec: Defining dependency "ipsec" 00:02:12.078 Message: lib/pdcp: Defining dependency "pdcp" 00:02:12.078 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:12.078 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:12.078 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:12.078 Message: lib/fib: Defining dependency "fib" 00:02:12.078 Message: lib/port: Defining dependency "port" 00:02:12.078 Message: lib/pdump: Defining dependency "pdump" 00:02:12.078 Message: lib/table: Defining dependency "table" 00:02:12.078 Message: lib/pipeline: Defining dependency "pipeline" 00:02:12.078 Message: lib/graph: Defining dependency "graph" 00:02:12.078 Message: lib/node: Defining dependency "node" 00:02:12.078 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:12.078 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:12.078 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:13.976 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:13.976 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:13.976 Compiler for C supports arguments -Wno-unused-value: YES 00:02:13.976 Compiler for C supports arguments -Wno-format: YES 00:02:13.976 Compiler for C supports arguments -Wno-format-security: YES 00:02:13.976 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:13.976 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:13.976 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:13.976 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:13.976 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:13.976 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:13.976 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:13.976 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:13.976 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:13.976 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:13.976 Has header "sys/epoll.h" : YES 00:02:13.976 Program doxygen found: YES (/usr/bin/doxygen) 00:02:13.976 Configuring doxy-api-html.conf using configuration 00:02:13.976 Configuring doxy-api-man.conf using configuration 00:02:13.976 Program mandb found: YES (/usr/bin/mandb) 00:02:13.976 Program sphinx-build found: NO 00:02:13.976 Configuring rte_build_config.h using configuration 00:02:13.976 Message: 00:02:13.976 ================= 00:02:13.976 Applications Enabled 00:02:13.976 
================= 00:02:13.976 00:02:13.976 apps: 00:02:13.976 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:13.976 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:13.976 test-pmd, test-regex, test-sad, test-security-perf, 00:02:13.976 00:02:13.976 Message: 00:02:13.976 ================= 00:02:13.976 Libraries Enabled 00:02:13.976 ================= 00:02:13.976 00:02:13.976 libs: 00:02:13.976 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:13.976 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:13.976 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:13.976 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:13.976 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:13.976 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:13.976 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:13.976 00:02:13.976 00:02:13.976 Message: 00:02:13.976 =============== 00:02:13.976 Drivers Enabled 00:02:13.976 =============== 00:02:13.976 00:02:13.976 common: 00:02:13.976 00:02:13.976 bus: 00:02:13.976 pci, vdev, 00:02:13.976 mempool: 00:02:13.976 ring, 00:02:13.976 dma: 00:02:13.976 00:02:13.976 net: 00:02:13.976 i40e, 00:02:13.976 raw: 00:02:13.976 00:02:13.976 crypto: 00:02:13.976 00:02:13.976 compress: 00:02:13.976 00:02:13.976 regex: 00:02:13.976 00:02:13.976 ml: 00:02:13.976 00:02:13.976 vdpa: 00:02:13.976 00:02:13.976 event: 00:02:13.976 00:02:13.976 baseband: 00:02:13.976 00:02:13.976 gpu: 00:02:13.976 00:02:13.976 00:02:13.976 Message: 00:02:13.976 ================= 00:02:13.976 Content Skipped 00:02:13.976 ================= 00:02:13.976 00:02:13.976 apps: 00:02:13.976 00:02:13.976 libs: 00:02:13.976 00:02:13.976 drivers: 00:02:13.976 common/cpt: not in enabled drivers build config 00:02:13.976 common/dpaax: not in enabled drivers build config 00:02:13.976 common/iavf: not in enabled drivers build config 00:02:13.976 common/idpf: not in enabled drivers build config 00:02:13.976 common/mvep: not in enabled drivers build config 00:02:13.976 common/octeontx: not in enabled drivers build config 00:02:13.976 bus/auxiliary: not in enabled drivers build config 00:02:13.976 bus/cdx: not in enabled drivers build config 00:02:13.976 bus/dpaa: not in enabled drivers build config 00:02:13.976 bus/fslmc: not in enabled drivers build config 00:02:13.976 bus/ifpga: not in enabled drivers build config 00:02:13.976 bus/platform: not in enabled drivers build config 00:02:13.976 bus/vmbus: not in enabled drivers build config 00:02:13.976 common/cnxk: not in enabled drivers build config 00:02:13.976 common/mlx5: not in enabled drivers build config 00:02:13.976 common/nfp: not in enabled drivers build config 00:02:13.976 common/qat: not in enabled drivers build config 00:02:13.976 common/sfc_efx: not in enabled drivers build config 00:02:13.976 mempool/bucket: not in enabled drivers build config 00:02:13.976 mempool/cnxk: not in enabled drivers build config 00:02:13.976 mempool/dpaa: not in enabled drivers build config 00:02:13.976 mempool/dpaa2: not in enabled drivers build config 00:02:13.976 mempool/octeontx: not in enabled drivers build config 00:02:13.976 mempool/stack: not in enabled drivers build config 00:02:13.976 dma/cnxk: not in enabled drivers build config 00:02:13.976 dma/dpaa: not in enabled drivers build config 00:02:13.976 dma/dpaa2: not in enabled drivers build 
config 00:02:13.976 dma/hisilicon: not in enabled drivers build config 00:02:13.976 dma/idxd: not in enabled drivers build config 00:02:13.976 dma/ioat: not in enabled drivers build config 00:02:13.976 dma/skeleton: not in enabled drivers build config 00:02:13.976 net/af_packet: not in enabled drivers build config 00:02:13.976 net/af_xdp: not in enabled drivers build config 00:02:13.976 net/ark: not in enabled drivers build config 00:02:13.976 net/atlantic: not in enabled drivers build config 00:02:13.976 net/avp: not in enabled drivers build config 00:02:13.976 net/axgbe: not in enabled drivers build config 00:02:13.976 net/bnx2x: not in enabled drivers build config 00:02:13.976 net/bnxt: not in enabled drivers build config 00:02:13.976 net/bonding: not in enabled drivers build config 00:02:13.976 net/cnxk: not in enabled drivers build config 00:02:13.976 net/cpfl: not in enabled drivers build config 00:02:13.976 net/cxgbe: not in enabled drivers build config 00:02:13.976 net/dpaa: not in enabled drivers build config 00:02:13.976 net/dpaa2: not in enabled drivers build config 00:02:13.976 net/e1000: not in enabled drivers build config 00:02:13.976 net/ena: not in enabled drivers build config 00:02:13.976 net/enetc: not in enabled drivers build config 00:02:13.976 net/enetfec: not in enabled drivers build config 00:02:13.976 net/enic: not in enabled drivers build config 00:02:13.976 net/failsafe: not in enabled drivers build config 00:02:13.976 net/fm10k: not in enabled drivers build config 00:02:13.976 net/gve: not in enabled drivers build config 00:02:13.976 net/hinic: not in enabled drivers build config 00:02:13.976 net/hns3: not in enabled drivers build config 00:02:13.976 net/iavf: not in enabled drivers build config 00:02:13.976 net/ice: not in enabled drivers build config 00:02:13.976 net/idpf: not in enabled drivers build config 00:02:13.976 net/igc: not in enabled drivers build config 00:02:13.976 net/ionic: not in enabled drivers build config 00:02:13.976 net/ipn3ke: not in enabled drivers build config 00:02:13.976 net/ixgbe: not in enabled drivers build config 00:02:13.976 net/mana: not in enabled drivers build config 00:02:13.976 net/memif: not in enabled drivers build config 00:02:13.976 net/mlx4: not in enabled drivers build config 00:02:13.976 net/mlx5: not in enabled drivers build config 00:02:13.976 net/mvneta: not in enabled drivers build config 00:02:13.976 net/mvpp2: not in enabled drivers build config 00:02:13.976 net/netvsc: not in enabled drivers build config 00:02:13.976 net/nfb: not in enabled drivers build config 00:02:13.976 net/nfp: not in enabled drivers build config 00:02:13.976 net/ngbe: not in enabled drivers build config 00:02:13.976 net/null: not in enabled drivers build config 00:02:13.976 net/octeontx: not in enabled drivers build config 00:02:13.976 net/octeon_ep: not in enabled drivers build config 00:02:13.976 net/pcap: not in enabled drivers build config 00:02:13.976 net/pfe: not in enabled drivers build config 00:02:13.976 net/qede: not in enabled drivers build config 00:02:13.976 net/ring: not in enabled drivers build config 00:02:13.976 net/sfc: not in enabled drivers build config 00:02:13.976 net/softnic: not in enabled drivers build config 00:02:13.976 net/tap: not in enabled drivers build config 00:02:13.976 net/thunderx: not in enabled drivers build config 00:02:13.976 net/txgbe: not in enabled drivers build config 00:02:13.976 net/vdev_netvsc: not in enabled drivers build config 00:02:13.976 net/vhost: not in enabled drivers build config 
00:02:13.976 net/virtio: not in enabled drivers build config 00:02:13.976 net/vmxnet3: not in enabled drivers build config 00:02:13.976 raw/cnxk_bphy: not in enabled drivers build config 00:02:13.976 raw/cnxk_gpio: not in enabled drivers build config 00:02:13.976 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:13.976 raw/ifpga: not in enabled drivers build config 00:02:13.976 raw/ntb: not in enabled drivers build config 00:02:13.976 raw/skeleton: not in enabled drivers build config 00:02:13.976 crypto/armv8: not in enabled drivers build config 00:02:13.976 crypto/bcmfs: not in enabled drivers build config 00:02:13.976 crypto/caam_jr: not in enabled drivers build config 00:02:13.976 crypto/ccp: not in enabled drivers build config 00:02:13.976 crypto/cnxk: not in enabled drivers build config 00:02:13.976 crypto/dpaa_sec: not in enabled drivers build config 00:02:13.976 crypto/dpaa2_sec: not in enabled drivers build config 00:02:13.976 crypto/ipsec_mb: not in enabled drivers build config 00:02:13.976 crypto/mlx5: not in enabled drivers build config 00:02:13.976 crypto/mvsam: not in enabled drivers build config 00:02:13.976 crypto/nitrox: not in enabled drivers build config 00:02:13.976 crypto/null: not in enabled drivers build config 00:02:13.976 crypto/octeontx: not in enabled drivers build config 00:02:13.976 crypto/openssl: not in enabled drivers build config 00:02:13.976 crypto/scheduler: not in enabled drivers build config 00:02:13.976 crypto/uadk: not in enabled drivers build config 00:02:13.976 crypto/virtio: not in enabled drivers build config 00:02:13.976 compress/isal: not in enabled drivers build config 00:02:13.976 compress/mlx5: not in enabled drivers build config 00:02:13.976 compress/octeontx: not in enabled drivers build config 00:02:13.976 compress/zlib: not in enabled drivers build config 00:02:13.976 regex/mlx5: not in enabled drivers build config 00:02:13.976 regex/cn9k: not in enabled drivers build config 00:02:13.976 ml/cnxk: not in enabled drivers build config 00:02:13.976 vdpa/ifc: not in enabled drivers build config 00:02:13.976 vdpa/mlx5: not in enabled drivers build config 00:02:13.976 vdpa/nfp: not in enabled drivers build config 00:02:13.976 vdpa/sfc: not in enabled drivers build config 00:02:13.976 event/cnxk: not in enabled drivers build config 00:02:13.976 event/dlb2: not in enabled drivers build config 00:02:13.976 event/dpaa: not in enabled drivers build config 00:02:13.976 event/dpaa2: not in enabled drivers build config 00:02:13.976 event/dsw: not in enabled drivers build config 00:02:13.976 event/opdl: not in enabled drivers build config 00:02:13.976 event/skeleton: not in enabled drivers build config 00:02:13.976 event/sw: not in enabled drivers build config 00:02:13.976 event/octeontx: not in enabled drivers build config 00:02:13.976 baseband/acc: not in enabled drivers build config 00:02:13.976 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:13.976 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:13.976 baseband/la12xx: not in enabled drivers build config 00:02:13.976 baseband/null: not in enabled drivers build config 00:02:13.976 baseband/turbo_sw: not in enabled drivers build config 00:02:13.976 gpu/cuda: not in enabled drivers build config 00:02:13.976 00:02:13.976 00:02:13.976 Build targets in project: 217 00:02:13.976 00:02:13.976 DPDK 23.11.0 00:02:13.976 00:02:13.976 User defined options 00:02:13.976 libdir : lib 00:02:13.976 prefix : /home/vagrant/spdk_repo/dpdk/build 00:02:13.976 c_args : -fPIC -g 
-fcommon -Werror -Wno-stringop-overflow 00:02:13.976 c_link_args : 00:02:13.976 enable_docs : false 00:02:13.976 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:13.976 enable_kmods : false 00:02:13.976 machine : native 00:02:13.976 tests : false 00:02:13.976 00:02:13.976 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:13.976 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:13.976 11:44:12 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:13.976 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:13.976 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:13.976 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:13.976 [3/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:13.976 [4/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:13.976 [5/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:13.976 [6/707] Linking static target lib/librte_kvargs.a 00:02:13.976 [7/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:13.976 [8/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:13.976 [9/707] Linking static target lib/librte_log.a 00:02:13.976 [10/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:14.233 [11/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.233 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:14.233 [13/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:14.233 [14/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:14.233 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:14.490 [16/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:14.490 [17/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.490 [18/707] Linking target lib/librte_log.so.24.0 00:02:14.490 [19/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:14.490 [20/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:14.490 [21/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:14.490 [22/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:14.490 [23/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:14.747 [24/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:14.747 [25/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:14.747 [26/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:14.747 [27/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:14.747 [28/707] Linking target lib/librte_kvargs.so.24.0 00:02:14.747 [29/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:15.004 [30/707] Linking static target lib/librte_telemetry.a 00:02:15.004 [31/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:15.004 [32/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:15.004 
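Note: the WARNING above is printed because the wrapper invoked `meson [options]` rather than the explicit `meson setup [options]` form. A minimal sketch of the equivalent explicit invocation, reconstructed only from the "User defined options" recorded in the summary above (the source directory is assumed to be /home/vagrant/spdk_repo/dpdk, as implied by the build paths; the CI wrapper's actual command line is not shown in this log):

    # Configure out-of-tree into build-tmp, installing into build/ (option values verbatim from the log)
    cd /home/vagrant/spdk_repo/dpdk
    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Dtests=false \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
    # Build with 10 parallel jobs, matching the ninja line recorded above
    ninja -C build-tmp -j10

The enable_drivers list is also why only bus/pci, bus/vdev, mempool/ring and net/i40e appear under "Drivers Enabled" while every other driver is reported as "not in enabled drivers build config".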
[33/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:15.004 [34/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:15.004 [35/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:15.004 [36/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:15.004 [37/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:15.004 [38/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:15.261 [39/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:15.261 [40/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:15.261 [41/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:15.261 [42/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.261 [43/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:15.261 [44/707] Linking target lib/librte_telemetry.so.24.0 00:02:15.261 [45/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:15.518 [46/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:15.518 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:15.518 [48/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:15.518 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:15.518 [50/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:15.518 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:15.776 [52/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:15.776 [53/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:15.776 [54/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:15.776 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:15.776 [56/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:15.776 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:15.776 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:16.034 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:16.034 [60/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:16.034 [61/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:16.034 [62/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:16.035 [63/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:16.035 [64/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:16.035 [65/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:16.035 [66/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:16.035 [67/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:16.035 [68/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:16.293 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:16.293 [70/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:16.293 [71/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:16.293 [72/707] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:16.293 [73/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:16.293 [74/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:16.293 [75/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:16.293 [76/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:16.551 [77/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:16.551 [78/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:16.812 [79/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:16.812 [80/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:16.812 [81/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:16.812 [82/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:16.812 [83/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:16.812 [84/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:16.812 [85/707] Linking static target lib/librte_ring.a 00:02:16.812 [86/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:17.074 [87/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:17.074 [88/707] Linking static target lib/librte_eal.a 00:02:17.074 [89/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.074 [90/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:17.331 [91/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:17.331 [92/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:17.331 [93/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:17.331 [94/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:17.331 [95/707] Linking static target lib/librte_mempool.a 00:02:17.590 [96/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:17.590 [97/707] Linking static target lib/librte_rcu.a 00:02:17.590 [98/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:17.590 [99/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:17.590 [100/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:17.590 [101/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:17.849 [102/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:17.849 [103/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:17.849 [104/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:17.849 [105/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.849 [106/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.849 [107/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:17.849 [108/707] Linking static target lib/librte_net.a 00:02:18.108 [109/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:18.108 [110/707] Linking static target lib/librte_meter.a 00:02:18.108 [111/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:18.108 [112/707] Linking static target lib/librte_mbuf.a 00:02:18.108 [113/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:18.108 [114/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:18.364 [115/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:18.365 [116/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.365 [117/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:18.365 [118/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:18.622 [119/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.622 [120/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:18.622 [121/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:19.187 [122/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:19.187 [123/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:19.187 [124/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:19.187 [125/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:19.187 [126/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:19.187 [127/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:19.187 [128/707] Linking static target lib/librte_pci.a 00:02:19.187 [129/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:19.187 [130/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:19.444 [131/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:19.444 [132/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:19.444 [133/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.444 [134/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:19.444 [135/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:19.444 [136/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:19.444 [137/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:19.444 [138/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:19.444 [139/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:19.444 [140/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:19.701 [141/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:19.701 [142/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:19.701 [143/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:19.701 [144/707] Linking static target lib/librte_cmdline.a 00:02:19.701 [145/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:19.957 [146/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:19.957 [147/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:19.957 [148/707] Linking static target lib/librte_metrics.a 00:02:19.957 [149/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:20.215 [150/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:20.215 [151/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.472 [152/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.472 [153/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 
00:02:20.472 [154/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:20.472 [155/707] Linking static target lib/librte_timer.a 00:02:20.730 [156/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:20.730 [157/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:20.730 [158/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.986 [159/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:20.986 [160/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:21.245 [161/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:21.245 [162/707] Linking static target lib/librte_bitratestats.a 00:02:21.245 [163/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:21.504 [164/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.504 [165/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:21.504 [166/707] Linking static target lib/librte_bbdev.a 00:02:21.764 [167/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:21.764 [168/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:22.023 [169/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.023 [170/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:22.283 [171/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:22.283 [172/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:22.283 [173/707] Linking static target lib/librte_hash.a 00:02:22.283 [174/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:22.283 [175/707] Linking static target lib/librte_ethdev.a 00:02:22.283 [176/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.541 [177/707] Linking target lib/librte_eal.so.24.0 00:02:22.541 [178/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:22.541 [179/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:22.541 [180/707] Linking static target lib/acl/libavx2_tmp.a 00:02:22.541 [181/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:22.541 [182/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:22.541 [183/707] Linking target lib/librte_ring.so.24.0 00:02:22.541 [184/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:22.541 [185/707] Linking target lib/librte_meter.so.24.0 00:02:22.799 [186/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.799 [187/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:22.799 [188/707] Linking target lib/librte_pci.so.24.0 00:02:22.799 [189/707] Linking target lib/librte_rcu.so.24.0 00:02:22.799 [190/707] Linking target lib/librte_mempool.so.24.0 00:02:22.799 [191/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:22.799 [192/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:22.799 [193/707] Linking target lib/librte_timer.so.24.0 00:02:22.799 [194/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:22.799 [195/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:22.799 [196/707] Linking static target lib/librte_cfgfile.a 00:02:22.799 [197/707] Linking target 
lib/librte_mbuf.so.24.0 00:02:22.799 [198/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:23.056 [199/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:23.056 [200/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:23.056 [201/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:23.056 [202/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:23.056 [203/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:23.056 [204/707] Linking target lib/librte_net.so.24.0 00:02:23.056 [205/707] Linking target lib/librte_bbdev.so.24.0 00:02:23.314 [206/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:23.314 [207/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.314 [208/707] Linking target lib/librte_cmdline.so.24.0 00:02:23.314 [209/707] Linking target lib/librte_hash.so.24.0 00:02:23.314 [210/707] Linking target lib/librte_cfgfile.so.24.0 00:02:23.314 [211/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:23.314 [212/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:23.314 [213/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:23.314 [214/707] Linking static target lib/librte_bpf.a 00:02:23.572 [215/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:23.572 [216/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:23.572 [217/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:23.572 [218/707] Linking static target lib/librte_compressdev.a 00:02:23.572 [219/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:23.572 [220/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.572 [221/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:23.572 [222/707] Linking static target lib/librte_acl.a 00:02:23.829 [223/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:23.829 [224/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:23.829 [225/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.829 [226/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.829 [227/707] Linking target lib/librte_compressdev.so.24.0 00:02:24.087 [228/707] Linking target lib/librte_acl.so.24.0 00:02:24.087 [229/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:24.087 [230/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:24.087 [231/707] Linking static target lib/librte_distributor.a 00:02:24.087 [232/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:24.346 [233/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:24.346 [234/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:24.346 [235/707] Linking static target lib/librte_dmadev.a 00:02:24.346 [236/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.346 [237/707] Linking target lib/librte_distributor.so.24.0 00:02:24.604 
[238/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.604 [239/707] Linking target lib/librte_dmadev.so.24.0 00:02:24.604 [240/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:24.604 [241/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:24.862 [242/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:24.862 [243/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:24.862 [244/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:24.862 [245/707] Linking static target lib/librte_efd.a 00:02:25.121 [246/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.121 [247/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:25.121 [248/707] Linking target lib/librte_efd.so.24.0 00:02:25.379 [249/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:25.379 [250/707] Linking static target lib/librte_dispatcher.a 00:02:25.379 [251/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:25.379 [252/707] Linking static target lib/librte_cryptodev.a 00:02:25.639 [253/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:25.639 [254/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:25.639 [255/707] Linking static target lib/librte_gpudev.a 00:02:25.639 [256/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.899 [257/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:25.899 [258/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:25.899 [259/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:26.158 [260/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:26.158 [261/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:26.158 [262/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.417 [263/707] Linking target lib/librte_gpudev.so.24.0 00:02:26.417 [264/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:26.417 [265/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.417 [266/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:26.417 [267/707] Linking target lib/librte_ethdev.so.24.0 00:02:26.417 [268/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:26.417 [269/707] Linking static target lib/librte_gro.a 00:02:26.417 [270/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:26.417 [271/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.417 [272/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:26.417 [273/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:26.675 [274/707] Linking target lib/librte_cryptodev.so.24.0 00:02:26.675 [275/707] Linking target lib/librte_metrics.so.24.0 00:02:26.675 [276/707] Linking target lib/librte_bpf.so.24.0 00:02:26.675 [277/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:26.675 [278/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.675 [279/707] Generating 
symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:26.675 [280/707] Linking target lib/librte_gro.so.24.0 00:02:26.675 [281/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:26.675 [282/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:26.675 [283/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:26.675 [284/707] Linking static target lib/librte_eventdev.a 00:02:26.675 [285/707] Linking target lib/librte_bitratestats.so.24.0 00:02:26.675 [286/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:26.675 [287/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:26.933 [288/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:26.933 [289/707] Linking static target lib/librte_gso.a 00:02:26.933 [290/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.933 [291/707] Linking target lib/librte_gso.so.24.0 00:02:27.191 [292/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:27.191 [293/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:27.191 [294/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:27.191 [295/707] Linking static target lib/librte_jobstats.a 00:02:27.191 [296/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:27.191 [297/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:27.191 [298/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:27.191 [299/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:27.191 [300/707] Linking static target lib/librte_latencystats.a 00:02:27.191 [301/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:27.191 [302/707] Linking static target lib/librte_ip_frag.a 00:02:27.450 [303/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.450 [304/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.450 [305/707] Linking target lib/librte_jobstats.so.24.0 00:02:27.450 [306/707] Linking target lib/librte_latencystats.so.24.0 00:02:27.450 [307/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.450 [308/707] Linking target lib/librte_ip_frag.so.24.0 00:02:27.706 [309/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:27.706 [310/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:27.706 [311/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:27.706 [312/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:27.706 [313/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:27.706 [314/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:27.706 [315/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:27.706 [316/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:27.963 [317/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:27.963 [318/707] Linking static target lib/librte_lpm.a 00:02:27.963 [319/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:28.226 
[320/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:28.226 [321/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:28.226 [322/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:28.226 [323/707] Linking static target lib/librte_pcapng.a 00:02:28.226 [324/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:28.226 [325/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.226 [326/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:28.226 [327/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:28.226 [328/707] Linking target lib/librte_lpm.so.24.0 00:02:28.495 [329/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.495 [330/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.495 [331/707] Linking target lib/librte_pcapng.so.24.0 00:02:28.495 [332/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:28.495 [333/707] Linking target lib/librte_eventdev.so.24.0 00:02:28.495 [334/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:28.495 [335/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:28.495 [336/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:28.495 [337/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:28.495 [338/707] Linking target lib/librte_dispatcher.so.24.0 00:02:28.752 [339/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:28.752 [340/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:28.752 [341/707] Linking static target lib/librte_power.a 00:02:28.752 [342/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:28.752 [343/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:28.752 [344/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:28.752 [345/707] Linking static target lib/librte_regexdev.a 00:02:28.752 [346/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:28.752 [347/707] Linking static target lib/librte_rawdev.a 00:02:29.010 [348/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:29.010 [349/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:29.010 [350/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:29.010 [351/707] Linking static target lib/librte_mldev.a 00:02:29.010 [352/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:29.010 [353/707] Linking static target lib/librte_member.a 00:02:29.267 [354/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.267 [355/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:29.267 [356/707] Linking target lib/librte_rawdev.so.24.0 00:02:29.267 [357/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.267 [358/707] Linking target lib/librte_power.so.24.0 00:02:29.267 [359/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.267 [360/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:29.267 [361/707] 
Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:29.267 [362/707] Linking target lib/librte_member.so.24.0 00:02:29.267 [363/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.525 [364/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:29.525 [365/707] Linking static target lib/librte_reorder.a 00:02:29.525 [366/707] Linking target lib/librte_regexdev.so.24.0 00:02:29.525 [367/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:29.525 [368/707] Linking static target lib/librte_rib.a 00:02:29.525 [369/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:29.525 [370/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:29.783 [371/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:29.783 [372/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.783 [373/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:29.783 [374/707] Linking target lib/librte_reorder.so.24.0 00:02:29.783 [375/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:29.783 [376/707] Linking static target lib/librte_stack.a 00:02:29.783 [377/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:29.783 [378/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.783 [379/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:29.783 [380/707] Linking static target lib/librte_security.a 00:02:29.783 [381/707] Linking target lib/librte_rib.so.24.0 00:02:29.783 [382/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.041 [383/707] Linking target lib/librte_stack.so.24.0 00:02:30.041 [384/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:30.041 [385/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.041 [386/707] Linking target lib/librte_mldev.so.24.0 00:02:30.041 [387/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:30.041 [388/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:30.300 [389/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.300 [390/707] Linking target lib/librte_security.so.24.0 00:02:30.300 [391/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:30.300 [392/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:30.558 [393/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:30.558 [394/707] Linking static target lib/librte_sched.a 00:02:30.558 [395/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:30.558 [396/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:30.816 [397/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.816 [398/707] Linking target lib/librte_sched.so.24.0 00:02:30.816 [399/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:30.816 [400/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:30.816 [401/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:31.074 [402/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:31.074 [403/707] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:31.333 [404/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:31.333 [405/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:31.333 [406/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:31.333 [407/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:31.591 [408/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:31.591 [409/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:31.591 [410/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:31.849 [411/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:31.849 [412/707] Linking static target lib/librte_ipsec.a 00:02:31.849 [413/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:31.849 [414/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:32.107 [415/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:32.107 [416/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.107 [417/707] Linking target lib/librte_ipsec.so.24.0 00:02:32.107 [418/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:32.107 [419/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:32.365 [420/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:32.623 [421/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:32.623 [422/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:32.623 [423/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:32.623 [424/707] Linking static target lib/librte_fib.a 00:02:32.884 [425/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:32.884 [426/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:32.884 [427/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.884 [428/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:32.884 [429/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:33.146 [430/707] Linking target lib/librte_fib.so.24.0 00:02:33.146 [431/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:33.146 [432/707] Linking static target lib/librte_pdcp.a 00:02:33.712 [433/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.712 [434/707] Linking target lib/librte_pdcp.so.24.0 00:02:33.712 [435/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:33.712 [436/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:33.712 [437/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:33.712 [438/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:33.970 [439/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:33.970 [440/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:33.970 [441/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:34.228 [442/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:34.228 [443/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:34.486 [444/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:34.486 [445/707] Linking static target lib/librte_port.a 00:02:34.486 
[446/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:34.486 [447/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:34.486 [448/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:34.744 [449/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:34.744 [450/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:34.744 [451/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:34.744 [452/707] Linking static target lib/librte_pdump.a 00:02:34.744 [453/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:35.002 [454/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.002 [455/707] Linking target lib/librte_port.so.24.0 00:02:35.002 [456/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.002 [457/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:35.002 [458/707] Linking target lib/librte_pdump.so.24.0 00:02:35.260 [459/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:35.260 [460/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:35.518 [461/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:35.518 [462/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:35.518 [463/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:35.518 [464/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:35.777 [465/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:35.777 [466/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:35.777 [467/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:35.777 [468/707] Linking static target lib/librte_table.a 00:02:36.341 [469/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:36.341 [470/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:36.341 [471/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:36.341 [472/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.598 [473/707] Linking target lib/librte_table.so.24.0 00:02:36.598 [474/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:36.855 [475/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:36.855 [476/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:36.855 [477/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:36.855 [478/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:37.112 [479/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:37.112 [480/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:37.369 [481/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:37.369 [482/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:37.369 [483/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:37.626 [484/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:37.626 [485/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:37.626 [486/707] Linking 
static target lib/librte_graph.a 00:02:37.891 [487/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:37.891 [488/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:38.155 [489/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:38.155 [490/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:38.155 [491/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:38.414 [492/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.414 [493/707] Linking target lib/librte_graph.so.24.0 00:02:38.414 [494/707] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:38.414 [495/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:38.672 [496/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:38.672 [497/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:38.930 [498/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:38.930 [499/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:38.930 [500/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:39.189 [501/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:39.189 [502/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:39.189 [503/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:39.189 [504/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:39.446 [505/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:39.447 [506/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:39.447 [507/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:39.447 [508/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:39.447 [509/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:39.704 [510/707] Linking static target lib/librte_node.a 00:02:39.704 [511/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:39.704 [512/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:39.962 [513/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.962 [514/707] Linking target lib/librte_node.so.24.0 00:02:39.962 [515/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:39.962 [516/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:39.962 [517/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:39.962 [518/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:40.221 [519/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:40.221 [520/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:40.221 [521/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:40.221 [522/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:40.221 [523/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:40.221 [524/707] Linking static target drivers/librte_bus_pci.a 00:02:40.221 [525/707] Linking static target drivers/librte_bus_vdev.a 00:02:40.221 [526/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:40.479 [527/707] Compiling C 
object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:40.479 [528/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:02:40.479 [529/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:02:40.479 [530/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.479 [531/707] Linking target drivers/librte_bus_vdev.so.24.0
00:02:40.479 [532/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:40.479 [533/707] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:40.738 [534/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols
00:02:40.738 [535/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.738 [536/707] Linking target drivers/librte_bus_pci.so.24.0
00:02:40.738 [537/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:40.738 [538/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:40.738 [539/707] Linking static target drivers/librte_mempool_ring.a
00:02:40.738 [540/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:40.738 [541/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols
00:02:40.738 [542/707] Linking target drivers/librte_mempool_ring.so.24.0
00:02:40.997 [543/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:02:41.257 [544/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:02:41.516 [545/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:02:41.775 [546/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:02:41.775 [547/707] Linking static target drivers/net/i40e/base/libi40e_base.a
00:02:42.343 [548/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:02:42.602 [549/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:02:42.860 [550/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:02:42.861 [551/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:02:42.861 [552/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:02:42.861 [553/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o
00:02:42.861 [554/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a
00:02:43.118 [555/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:02:43.378 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:02:43.378 [557/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:02:43.639 [558/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:02:43.639 [559/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:02:43.897 [560/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:02:44.155 [561/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:02:44.155 [562/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:02:44.155 [563/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:02:44.413 [564/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:02:44.671 [565/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:02:44.671 [566/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:02:44.671 [567/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:02:44.930 [568/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:02:44.930 [569/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:02:44.930 [570/707] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:02:45.187 [571/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:02:45.443 [572/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:02:45.700 [573/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:02:45.700 [574/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:02:45.957 [575/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:02:45.957 [576/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:02:45.957 [577/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:02:45.957 [578/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:02:46.213 [579/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:02:46.213 [580/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:02:46.213 [581/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:02:46.470 [582/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:02:46.470 [583/707] Linking static target drivers/libtmp_rte_net_i40e.a
00:02:46.727 [584/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:02:46.984 [585/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:02:46.984 [586/707] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:02:46.984 [587/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:02:46.984 [588/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:46.984 [589/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:46.984 [590/707] Linking static target drivers/librte_net_i40e.a
00:02:47.241 [591/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:02:47.498 [592/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:02:47.498 [593/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:02:47.498 [594/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:02:47.755 [595/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:02:47.755 [596/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:02:48.012 [597/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:02:48.012 [598/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:02:48.012 [599/707] Linking target drivers/librte_net_i40e.so.24.0
00:02:48.269 [600/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:02:48.269 [601/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:02:48.526 [602/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:02:48.789 [603/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:02:48.789 [604/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:02:49.047 [605/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:02:49.047 [606/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:02:49.047 [607/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:02:49.047 [608/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:02:49.047 [609/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:02:49.305 [610/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:02:49.305 [611/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o
00:02:49.562 [612/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:02:49.562 [613/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:02:49.562 [614/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:02:49.562 [615/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:49.819 [616/707] Linking static target lib/librte_vhost.a
00:02:49.819 [617/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:02:49.819 [618/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:02:50.749 [619/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.749 [620/707] Linking target lib/librte_vhost.so.24.0
00:02:51.007 [621/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:02:51.007 [622/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:02:51.007 [623/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:02:51.264 [624/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:02:51.264 [625/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:02:51.521 [626/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:02:51.521 [627/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:02:51.521 [628/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:02:51.521 [629/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:02:51.521 [630/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:02:51.778 [631/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:02:51.778 [632/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:02:52.035 [633/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o
00:02:52.035 [634/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o
00:02:52.035 [635/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o
00:02:52.035 [636/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o
00:02:52.035 [637/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o
00:02:52.293 [638/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o
00:02:52.551 [639/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o
00:02:52.551 [640/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o
00:02:52.551 [641/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:02:52.551 [642/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o
00:02:52.841 [643/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:02:52.841 [644/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:02:52.841 [645/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:02:52.841 [646/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:02:53.099 [647/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:02:53.099 [648/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:02:53.099 [649/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:02:53.099 [650/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:02:53.356 [651/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:02:53.356 [652/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:02:53.614 [653/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:02:53.614 [654/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o
00:02:53.614 [655/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:02:53.614 [656/707] Linking static target lib/librte_pipeline.a
00:02:53.872 [657/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:02:53.872 [658/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o
00:02:54.130 [659/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:02:54.130 [660/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:02:54.130 [661/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:02:54.389 [662/707] Linking target app/dpdk-dumpcap
00:02:54.389 [663/707] Linking target app/dpdk-graph
00:02:54.389 [664/707] Linking target app/dpdk-pdump
00:02:54.654 [665/707] Linking target app/dpdk-proc-info
00:02:54.654 [666/707] Linking target app/dpdk-test-bbdev
00:02:54.919 [667/707] Linking target app/dpdk-test-acl
00:02:54.919 [668/707] Linking target app/dpdk-test-cmdline
00:02:54.919 [669/707] Linking target app/dpdk-test-compress-perf
00:02:55.179 [670/707] Linking target app/dpdk-test-crypto-perf
00:02:55.179 [671/707] Linking target app/dpdk-test-dma-perf
00:02:55.179 [672/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:02:55.437 [673/707] Linking target app/dpdk-test-fib
00:02:55.437 [674/707] Linking target app/dpdk-test-flow-perf
00:02:55.694 [675/707] Linking target app/dpdk-test-gpudev
00:02:55.694 [676/707] Linking target app/dpdk-test-eventdev
00:02:55.694 [677/707] Linking target app/dpdk-test-pipeline
00:02:55.694 [678/707] Linking target app/dpdk-test-mldev
00:02:55.952 [679/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:02:56.210 [680/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:02:56.210 [681/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:02:56.210 [682/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:02:56.210 [683/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:02:56.468 [684/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:02:56.468 [685/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:02:56.727 [686/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:02:56.727 [687/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:02:56.727 [688/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:02:56.984 [689/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:02:56.984 [690/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.984 [691/707] Linking target lib/librte_pipeline.so.24.0
00:02:57.242 [692/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:02:57.242 [693/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:02:57.242 [694/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:02:57.502 [695/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:02:57.760 [696/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:02:57.760 [697/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:02:57.760 [698/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:02:57.760 [699/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:02:58.019 [700/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:02:58.278 [701/707] Linking target app/dpdk-test-sad
00:02:58.278 [702/707] Linking target app/dpdk-test-regex
00:02:58.536 [703/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:02:58.536 [704/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:02:58.536 [705/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:02:58.794 [706/707] Linking target app/dpdk-test-security-perf
00:02:59.053 [707/707] Linking target app/dpdk-testpmd
00:02:59.053 11:44:57 build_native_dpdk -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:02:59.053 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:59.053 [0/1] Installing files.
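For reference, the stage logged above is an ordinary meson/ninja build of DPDK followed by an install pass. A minimal sketch of the equivalent manual sequence is below; the meson setup line is an assumption (the configure flags autobuild_common.sh actually passed are not shown in this excerpt), while the --prefix value is inferred from the install destinations that follow, all of which land under /home/vagrant/spdk_repo/dpdk/build.

  $ DPDK_DIR=/home/vagrant/spdk_repo/dpdk
  $ meson setup "$DPDK_DIR/build-tmp" "$DPDK_DIR" --prefix="$DPDK_DIR/build"   # assumed configure step
  $ ninja -C "$DPDK_DIR/build-tmp" -j10            # produces the 707 build targets above
  $ ninja -C "$DPDK_DIR/build-tmp" -j10 install    # the step captured in this log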
00:02:59.313 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:02:59.313 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:59.313 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:02:59.313 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:59.313 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:59.313 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:02:59.313 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:59.313 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:59.313 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:59.313 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:59.313 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:02:59.313 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:59.313 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:59.313 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:59.313 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:59.313 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:59.313 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:02:59.313 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:02:59.313 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.314 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.315 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:59.315 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:59.315 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:59.316 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.317 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:59.318 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.318 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.319 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.319 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.319 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:59.319 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:59.319 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:59.319 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:59.319 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:59.319 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:59.319 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:59.319 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:59.319 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:02:59.319 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:59.319 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:59.319 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.319 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.578 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
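
Each library in this phase is installed twice: the static archive (librte_*.a) and the versioned shared object (librte_*.so.24.0). The "Installing symlink pointing to ..." entries later in this phase complete the conventional soname chain on top of the versioned file. A minimal sketch of the resulting layout, using librte_eal as an example and the same /home/vagrant/spdk_repo/dpdk/build prefix as above:

  # Inspect the soname chain produced by the install step
  # (illustrative; librte_eal chosen as an example)
  ls -l /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so*
  # Expected shape (annotations are assumptions, not captured log output):
  #   librte_eal.so      -> librte_eal.so.24    (link-time name)
  #   librte_eal.so.24   -> librte_eal.so.24.0  (runtime soname)
  #   librte_eal.so.24.0                        (the actual DSO)
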
00:02:59.579 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
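
Together with the headers installed below and the libdpdk.pc / libdpdk-libs.pc files placed into build/lib/pkgconfig at the end of this phase, the tree is self-contained enough to compile against without a system-wide install. A minimal sketch, assuming a hypothetical hello_dpdk.c that calls rte_eal_init():

  # Build against the just-installed tree via pkg-config
  # (hello_dpdk.c is hypothetical, not part of this build)
  export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
  cc hello_dpdk.c -o hello_dpdk $(pkg-config --cflags --libs libdpdk)
  # Run against the shared objects installed above; --no-huge avoids needing hugepages
  LD_LIBRARY_PATH=/home/vagrant/spdk_repo/dpdk/build/lib ./hello_dpdk --no-huge -l 0
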
00:02:59.579 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.579 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.840 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.840 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.840 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.840 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:59.840 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.840 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:59.840 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.840 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:59.840 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:02:59.840 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:02:59.840 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.840 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.841 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.842 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.843 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.843 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.843 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.843 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.843 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.843 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.843 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.843 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.843 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.843 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.843 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.843 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:02:59.843 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.843 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.843 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.843 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.843 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.843 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.843 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.843 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.843 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:02:59.843 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:02:59.843 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:59.843 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:02:59.843 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:02:59.843 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:02:59.843 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:02:59.843 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:02:59.843 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:02:59.843 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:02:59.843 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:02:59.843 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:02:59.843 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:02:59.843 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:02:59.843 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:02:59.843 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:02:59.843 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:02:59.843 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:02:59.843 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:02:59.843 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:02:59.843 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:02:59.843 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:02:59.843 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:02:59.843 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:02:59.843 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:02:59.843 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:02:59.843 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:02:59.843 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:02:59.843 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:02:59.843 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:02:59.843 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:02:59.843 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:02:59.843 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:02:59.843 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:02:59.843 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:02:59.843 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:02:59.843 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:02:59.843 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:02:59.843 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:02:59.843 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:02:59.843 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:02:59.843 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:02:59.843 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:02:59.843 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:02:59.843 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:02:59.843 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:02:59.843 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:02:59.843 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:02:59.843 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:02:59.843 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:02:59.843 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:02:59.843 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:02:59.843 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:02:59.843 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:02:59.843 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:02:59.843 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:02:59.843 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:02:59.843 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:02:59.843 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:02:59.843 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:02:59.843 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:02:59.843 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:02:59.843 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:02:59.843 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:02:59.843 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:02:59.843 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:02:59.843 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:02:59.843 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:02:59.843 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:02:59.843 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:02:59.843 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:02:59.843 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:02:59.843 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:02:59.843 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:02:59.843 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:02:59.843 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:02:59.843 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:02:59.843 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:02:59.843 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:02:59.843 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:02:59.843 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:02:59.843 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:02:59.843 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:02:59.843 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:02:59.843 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:02:59.843 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:02:59.843 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:02:59.843 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:02:59.843 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:02:59.843 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:02:59.843 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:02:59.843 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:02:59.843 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:02:59.843 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:02:59.843 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:59.843 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:59.843 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:59.843 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:59.843 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:59.843 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:59.843 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:59.843 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:59.843 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:59.843 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:59.843 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:59.843 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:59.843 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:02:59.843 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:02:59.843 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:02:59.843 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:02:59.843 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:02:59.843 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:02:59.843 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:02:59.843 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:02:59.843 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:02:59.843 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:02:59.843 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:02:59.843 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:02:59.843 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:02:59.843 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:02:59.843 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:02:59.843 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:02:59.843 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:02:59.843 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:02:59.843 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:02:59.843 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:02:59.843 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:02:59.843 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:02:59.843 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:59.843 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:59.843 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:59.843 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:59.843 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:59.843 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:59.844 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
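The symlink chains being laid down here follow the standard ELF shared-library versioning convention: the real object is librte_<name>.so.24.0, the soname link librte_<name>.so.24 is what the dynamic loader resolves at run time, and the bare librte_<name>.so is the development name the link editor resolves for -lrte_<name>. A minimal sketch of one such chain done by hand, assuming the build prefix used throughout this log (librte_vhost is just an example picked from the entries above):

    cd /home/vagrant/spdk_repo/dpdk/build/lib
    ln -sf librte_vhost.so.24.0 librte_vhost.so.24   # soname link, resolved by the dynamic loader
    ln -sf librte_vhost.so.24   librte_vhost.so      # dev link, resolved at link time via -lrte_vhost

The './librte_bus_pci.so' -> 'dpdk/pmds-24.0/...' renames above move the driver (PMD) objects into a versioned plugin directory instead; their symlinks are recreated there by the custom install script run just below.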
00:02:59.844 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:59.844 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:59.844 11:44:58 build_native_dpdk -- common/autobuild_common.sh@189 -- $ uname -s 00:02:59.844 11:44:58 build_native_dpdk -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:59.844 11:44:58 build_native_dpdk -- common/autobuild_common.sh@200 -- $ cat 00:02:59.844 11:44:58 build_native_dpdk -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:59.844 00:02:59.844 real 0m53.333s 00:02:59.844 user 6m22.519s 00:02:59.844 sys 0m57.370s 00:02:59.844 11:44:58 build_native_dpdk -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:02:59.844 11:44:58 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:59.844 ************************************ 00:02:59.844 END TEST build_native_dpdk 00:02:59.844 ************************************ 00:03:00.102 11:44:58 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:00.102 11:44:58 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:00.102 11:44:58 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:00.102 11:44:58 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:00.102 11:44:58 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:00.102 11:44:58 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:00.102 11:44:58 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:00.102 11:44:58 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme --with-shared 00:03:00.102 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:00.372 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:00.372 DPDK includes: /home/vagrant/spdk_repo/dpdk/build/include 00:03:00.372 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:00.642 Using 'verbs' RDMA provider 00:03:16.896 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:31.783 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:32.352 Creating mk/config.mk...done. 00:03:32.352 Creating mk/cc.flags.mk...done. 00:03:32.352 Type 'make' to build. 00:03:32.352 11:45:31 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:32.352 11:45:31 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']' 00:03:32.352 11:45:31 -- common/autotest_common.sh@1103 -- $ xtrace_disable 00:03:32.352 11:45:31 -- common/autotest_common.sh@10 -- $ set +x 00:03:32.352 ************************************ 00:03:32.352 START TEST make 00:03:32.352 ************************************ 00:03:32.352 11:45:31 make -- common/autotest_common.sh@1121 -- $ make -j10 00:03:32.921 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:32.921 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:32.921 meson setup builddir \ 00:03:32.921 -Dwith-libaio=enabled \ 00:03:32.921 -Dwith-liburing=enabled \ 00:03:32.921 -Dwith-libvfn=disabled \ 00:03:32.921 -Dwith-spdk=false && \ 00:03:32.921 meson compile -C builddir && \ 00:03:32.921 cd -) 00:03:32.921 make[1]: Nothing to be done for 'all'. 
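The configure invocation above picks up the libdpdk.pc files that the DPDK install step wrote to build/lib/pkgconfig ("Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs..."). A minimal sketch of verifying that discovery by hand before handing the tree to --with-dpdk, assuming the same prefix as this run:

    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk      # version recorded in libdpdk.pc
    pkg-config --cflags --libs libdpdk   # the include and link flags configure resolves

Note also that the xnvme subproject is set up with its own SPDK backend off (-Dwith-spdk=false), which matches the "Run-time dependency _spdk found: NO" probe in the meson summary that follows.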
00:03:34.837 The Meson build system 00:03:34.837 Version: 1.3.1 00:03:34.837 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:34.837 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:34.837 Build type: native build 00:03:34.837 Project name: xnvme 00:03:34.837 Project version: 0.7.3 00:03:34.837 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:34.837 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:34.837 Host machine cpu family: x86_64 00:03:34.837 Host machine cpu: x86_64 00:03:34.837 Message: host_machine.system: linux 00:03:34.837 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:34.837 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:34.837 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:34.837 Run-time dependency threads found: YES 00:03:34.837 Has header "setupapi.h" : NO 00:03:34.837 Has header "linux/blkzoned.h" : YES 00:03:34.837 Has header "linux/blkzoned.h" : YES (cached) 00:03:34.837 Has header "libaio.h" : YES 00:03:34.837 Library aio found: YES 00:03:34.837 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:34.837 Run-time dependency liburing found: YES 2.2 00:03:34.837 Dependency libvfn skipped: feature with-libvfn disabled 00:03:34.837 Run-time dependency appleframeworks found: NO (tried framework) 00:03:34.837 Run-time dependency appleframeworks found: NO (tried framework) 00:03:34.837 Configuring xnvme_config.h using configuration 00:03:34.837 Configuring xnvme.spec using configuration 00:03:34.837 Run-time dependency bash-completion found: YES 2.11 00:03:34.837 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:34.837 Program cp found: YES (/usr/bin/cp) 00:03:34.837 Has header "winsock2.h" : NO 00:03:34.837 Has header "dbghelp.h" : NO 00:03:34.837 Library rpcrt4 found: NO 00:03:34.837 Library rt found: YES 00:03:34.837 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:34.837 Found CMake: /usr/bin/cmake (3.27.7) 00:03:34.837 Run-time dependency _spdk found: NO (tried pkgconfig and cmake) 00:03:34.837 Run-time dependency wpdk found: NO (tried pkgconfig and cmake) 00:03:34.837 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake) 00:03:34.837 Build targets in project: 32 00:03:34.837 00:03:34.837 xnvme 0.7.3 00:03:34.837 00:03:34.837 User defined options 00:03:34.837 with-libaio : enabled 00:03:34.837 with-liburing: enabled 00:03:34.837 with-libvfn : disabled 00:03:34.837 with-spdk : false 00:03:34.837 00:03:34.837 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:35.406 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:35.406 [1/203] Generating toolbox/xnvme-driver-script with a custom command 00:03:35.406 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o 00:03:35.406 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o 00:03:35.406 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o 00:03:35.406 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o 00:03:35.406 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o 00:03:35.406 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o 00:03:35.406 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o 00:03:35.406 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o 00:03:35.406 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o 
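The "User defined options" block above simply records what was passed on the meson setup line; the resolved values can be re-inspected, or changed, later without reconfiguring from scratch. A short sketch against the builddir created above; the -Dwith-libvfn line is illustrative only and was not run in this job:

    cd /home/vagrant/spdk_repo/spdk/xnvme
    meson configure builddir | grep with-            # current values of the feature options
    meson configure builddir -Dwith-libvfn=enabled   # illustrative only; would require libvfn to be present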
00:03:35.406 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o 00:03:35.406 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o 00:03:35.666 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o 00:03:35.666 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o 00:03:35.666 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o 00:03:35.666 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o 00:03:35.666 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o 00:03:35.666 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o 00:03:35.666 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o 00:03:35.666 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o 00:03:35.666 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o 00:03:35.666 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o 00:03:35.666 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o 00:03:35.666 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o 00:03:35.666 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o 00:03:35.666 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o 00:03:35.666 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o 00:03:35.666 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o 00:03:35.666 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o 00:03:35.666 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o 00:03:35.666 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o 00:03:35.666 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o 00:03:35.666 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o 00:03:35.666 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o 00:03:35.666 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o 00:03:35.666 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o 00:03:35.666 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o 00:03:35.666 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o 00:03:35.666 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o 00:03:35.925 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o 00:03:35.925 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o 00:03:35.925 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o 00:03:35.925 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o 00:03:35.925 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o 00:03:35.925 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o 00:03:35.925 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o 00:03:35.925 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o 00:03:35.925 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o 00:03:35.925 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o 00:03:35.925 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o 00:03:35.925 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o 00:03:35.925 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o 00:03:35.925 [53/203] Compiling C object 
lib/libxnvme.so.p/xnvme_libconf_entries.c.o 00:03:35.925 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o 00:03:35.925 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o 00:03:35.925 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o 00:03:35.925 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o 00:03:35.925 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o 00:03:35.925 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o 00:03:35.925 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o 00:03:35.925 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o 00:03:35.925 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o 00:03:35.925 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o 00:03:35.925 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o 00:03:35.925 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o 00:03:36.185 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o 00:03:36.185 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o 00:03:36.185 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o 00:03:36.185 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o 00:03:36.185 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o 00:03:36.185 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o 00:03:36.185 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o 00:03:36.185 [73/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o 00:03:36.185 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o 00:03:36.185 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o 00:03:36.185 [76/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o 00:03:36.185 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o 00:03:36.185 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o 00:03:36.185 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o 00:03:36.185 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o 00:03:36.185 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o 00:03:36.185 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o 00:03:36.450 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o 00:03:36.450 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o 00:03:36.450 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o 00:03:36.450 [86/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o 00:03:36.450 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o 00:03:36.450 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o 00:03:36.450 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o 00:03:36.450 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o 00:03:36.450 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o 00:03:36.450 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o 00:03:36.450 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o 00:03:36.450 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o 00:03:36.450 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o 00:03:36.450 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o 00:03:36.450 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o 00:03:36.450 [98/203] Compiling C object 
lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o 00:03:36.450 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o 00:03:36.450 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o 00:03:36.450 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o 00:03:36.450 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o 00:03:36.450 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o 00:03:36.450 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o 00:03:36.450 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o 00:03:36.715 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o 00:03:36.715 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o 00:03:36.715 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o 00:03:36.715 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o 00:03:36.715 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o 00:03:36.715 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o 00:03:36.715 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o 00:03:36.715 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o 00:03:36.715 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o 00:03:36.715 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o 00:03:36.715 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o 00:03:36.715 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o 00:03:36.715 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o 00:03:36.715 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o 00:03:36.715 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o 00:03:36.715 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o 00:03:36.715 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o 00:03:36.715 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o 00:03:36.715 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o 00:03:36.715 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o 00:03:36.715 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o 00:03:36.715 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o 00:03:36.715 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o 00:03:36.715 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o 00:03:36.715 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o 00:03:36.715 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o 00:03:36.974 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o 00:03:36.974 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o 00:03:36.974 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o 00:03:36.974 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o 00:03:36.974 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o 00:03:36.974 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o 00:03:36.974 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o 00:03:36.974 [139/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o 00:03:36.974 [140/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o 00:03:36.974 [141/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o 00:03:36.974 [142/203] Compiling C object 
tests/xnvme_tests_async_intf.p/async_intf.c.o 00:03:36.974 [143/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o 00:03:36.974 [144/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o 00:03:36.974 [145/203] Linking target lib/libxnvme.so 00:03:36.974 [146/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o 00:03:36.974 [147/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o 00:03:36.974 [148/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o 00:03:36.974 [149/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o 00:03:37.234 [150/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o 00:03:37.234 [151/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o 00:03:37.234 [152/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o 00:03:37.234 [153/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o 00:03:37.234 [154/203] Compiling C object tests/xnvme_tests_map.p/map.c.o 00:03:37.234 [155/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o 00:03:37.234 [156/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o 00:03:37.234 [157/203] Compiling C object tools/xdd.p/xdd.c.o 00:03:37.234 [158/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o 00:03:37.234 [159/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o 00:03:37.234 [160/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o 00:03:37.234 [161/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o 00:03:37.234 [162/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o 00:03:37.234 [163/203] Compiling C object tools/kvs.p/kvs.c.o 00:03:37.495 [164/203] Compiling C object tools/lblk.p/lblk.c.o 00:03:37.495 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o 00:03:37.495 [166/203] Compiling C object tools/zoned.p/zoned.c.o 00:03:37.495 [167/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o 00:03:37.495 [168/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o 00:03:37.495 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o 00:03:37.495 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o 00:03:37.495 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o 00:03:37.495 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o 00:03:37.495 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o 00:03:37.495 [174/203] Linking static target lib/libxnvme.a 00:03:37.754 [175/203] Linking target tests/xnvme_tests_cli 00:03:37.754 [176/203] Linking target tests/xnvme_tests_lblk 00:03:37.754 [177/203] Linking target tests/xnvme_tests_buf 00:03:37.754 [178/203] Linking target tests/xnvme_tests_ioworker 00:03:37.754 [179/203] Linking target tests/xnvme_tests_async_intf 00:03:37.754 [180/203] Linking target tests/xnvme_tests_scc 00:03:37.754 [181/203] Linking target tests/xnvme_tests_xnvme_cli 00:03:37.754 [182/203] Linking target tests/xnvme_tests_enum 00:03:37.754 [183/203] Linking target tests/xnvme_tests_znd_append 00:03:37.754 [184/203] Linking target tests/xnvme_tests_xnvme_file 00:03:37.754 [185/203] Linking target tests/xnvme_tests_znd_explicit_open 00:03:37.754 [186/203] Linking target tests/xnvme_tests_znd_state 00:03:37.754 [187/203] Linking target tests/xnvme_tests_kvs 00:03:37.754 [188/203] Linking target tests/xnvme_tests_znd_zrwa 00:03:37.755 [189/203] Linking target tests/xnvme_tests_map 00:03:37.755 [190/203] Linking target tools/kvs 00:03:37.755 
[191/203] Linking target tools/xdd 00:03:37.755 [192/203] Linking target tools/xnvme 00:03:37.755 [193/203] Linking target tools/zoned 00:03:37.755 [194/203] Linking target tools/xnvme_file 00:03:37.755 [195/203] Linking target tools/lblk 00:03:37.755 [196/203] Linking target examples/xnvme_hello 00:03:37.755 [197/203] Linking target examples/xnvme_dev 00:03:37.755 [198/203] Linking target examples/xnvme_io_async 00:03:37.755 [199/203] Linking target examples/xnvme_enum 00:03:37.755 [200/203] Linking target examples/xnvme_single_sync 00:03:37.755 [201/203] Linking target examples/xnvme_single_async 00:03:37.755 [202/203] Linking target examples/zoned_io_async 00:03:37.755 [203/203] Linking target examples/zoned_io_sync 00:03:37.755 INFO: autodetecting backend as ninja 00:03:37.755 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:37.755 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:55.853 CC lib/ut_mock/mock.o 00:03:55.853 CC lib/log/log_flags.o 00:03:55.853 CC lib/log/log_deprecated.o 00:03:55.853 CC lib/log/log.o 00:03:55.853 CC lib/ut/ut.o 00:03:55.853 LIB libspdk_ut_mock.a 00:03:55.853 LIB libspdk_log.a 00:03:55.853 SO libspdk_ut_mock.so.6.0 00:03:55.853 LIB libspdk_ut.a 00:03:55.853 SO libspdk_log.so.7.0 00:03:55.853 SO libspdk_ut.so.2.0 00:03:55.853 SYMLINK libspdk_ut_mock.so 00:03:55.853 SYMLINK libspdk_log.so 00:03:55.853 SYMLINK libspdk_ut.so 00:03:55.853 CC lib/dma/dma.o 00:03:55.853 CXX lib/trace_parser/trace.o 00:03:55.853 CC lib/util/cpuset.o 00:03:55.853 CC lib/util/bit_array.o 00:03:55.853 CC lib/util/crc16.o 00:03:55.853 CC lib/util/base64.o 00:03:55.853 CC lib/util/crc32.o 00:03:55.853 CC lib/util/crc32c.o 00:03:55.853 CC lib/ioat/ioat.o 00:03:55.853 CC lib/vfio_user/host/vfio_user_pci.o 00:03:55.853 CC lib/vfio_user/host/vfio_user.o 00:03:55.853 CC lib/util/crc32_ieee.o 00:03:55.853 CC lib/util/crc64.o 00:03:55.853 CC lib/util/dif.o 00:03:55.853 LIB libspdk_dma.a 00:03:55.853 CC lib/util/fd.o 00:03:55.853 SO libspdk_dma.so.4.0 00:03:55.853 CC lib/util/file.o 00:03:55.853 SYMLINK libspdk_dma.so 00:03:55.853 CC lib/util/hexlify.o 00:03:55.853 CC lib/util/iov.o 00:03:55.853 CC lib/util/math.o 00:03:55.853 LIB libspdk_ioat.a 00:03:55.853 CC lib/util/pipe.o 00:03:55.853 SO libspdk_ioat.so.7.0 00:03:55.853 LIB libspdk_vfio_user.a 00:03:55.853 CC lib/util/strerror_tls.o 00:03:55.853 SO libspdk_vfio_user.so.5.0 00:03:55.853 SYMLINK libspdk_ioat.so 00:03:55.853 CC lib/util/string.o 00:03:55.853 CC lib/util/uuid.o 00:03:55.853 CC lib/util/fd_group.o 00:03:55.853 SYMLINK libspdk_vfio_user.so 00:03:55.853 CC lib/util/xor.o 00:03:55.853 CC lib/util/zipf.o 00:03:55.853 LIB libspdk_util.a 00:03:55.853 SO libspdk_util.so.9.0 00:03:56.110 LIB libspdk_trace_parser.a 00:03:56.110 SO libspdk_trace_parser.so.5.0 00:03:56.110 SYMLINK libspdk_util.so 00:03:56.369 SYMLINK libspdk_trace_parser.so 00:03:56.369 CC lib/idxd/idxd_kernel.o 00:03:56.369 CC lib/idxd/idxd_user.o 00:03:56.369 CC lib/idxd/idxd.o 00:03:56.369 CC lib/rdma/common.o 00:03:56.369 CC lib/rdma/rdma_verbs.o 00:03:56.369 CC lib/json/json_parse.o 00:03:56.369 CC lib/json/json_util.o 00:03:56.369 CC lib/conf/conf.o 00:03:56.369 CC lib/vmd/vmd.o 00:03:56.369 CC lib/env_dpdk/env.o 00:03:56.627 CC lib/json/json_write.o 00:03:56.627 CC lib/vmd/led.o 00:03:56.627 LIB libspdk_conf.a 00:03:56.627 CC lib/env_dpdk/memory.o 00:03:56.627 CC lib/env_dpdk/pci.o 00:03:56.627 CC lib/env_dpdk/init.o 00:03:56.627 SO libspdk_conf.so.6.0 00:03:56.627 LIB libspdk_rdma.a 00:03:56.627 
SYMLINK libspdk_conf.so 00:03:56.627 CC lib/env_dpdk/threads.o 00:03:56.627 SO libspdk_rdma.so.6.0 00:03:56.627 CC lib/env_dpdk/pci_ioat.o 00:03:56.886 SYMLINK libspdk_rdma.so 00:03:56.886 CC lib/env_dpdk/pci_virtio.o 00:03:56.886 LIB libspdk_json.a 00:03:56.886 CC lib/env_dpdk/pci_vmd.o 00:03:56.886 SO libspdk_json.so.6.0 00:03:56.886 CC lib/env_dpdk/pci_idxd.o 00:03:56.886 CC lib/env_dpdk/pci_event.o 00:03:56.886 SYMLINK libspdk_json.so 00:03:56.886 CC lib/env_dpdk/sigbus_handler.o 00:03:56.886 CC lib/env_dpdk/pci_dpdk.o 00:03:57.145 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:57.145 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:57.145 LIB libspdk_idxd.a 00:03:57.145 SO libspdk_idxd.so.12.0 00:03:57.145 LIB libspdk_vmd.a 00:03:57.145 CC lib/jsonrpc/jsonrpc_server.o 00:03:57.145 CC lib/jsonrpc/jsonrpc_client.o 00:03:57.145 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:57.145 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:57.145 SYMLINK libspdk_idxd.so 00:03:57.145 SO libspdk_vmd.so.6.0 00:03:57.404 SYMLINK libspdk_vmd.so 00:03:57.404 LIB libspdk_jsonrpc.a 00:03:57.663 SO libspdk_jsonrpc.so.6.0 00:03:57.663 SYMLINK libspdk_jsonrpc.so 00:03:58.229 CC lib/rpc/rpc.o 00:03:58.229 LIB libspdk_rpc.a 00:03:58.487 SO libspdk_rpc.so.6.0 00:03:58.487 LIB libspdk_env_dpdk.a 00:03:58.487 SYMLINK libspdk_rpc.so 00:03:58.487 SO libspdk_env_dpdk.so.14.0 00:03:58.487 SYMLINK libspdk_env_dpdk.so 00:03:58.745 CC lib/trace/trace.o 00:03:58.745 CC lib/trace/trace_flags.o 00:03:58.745 CC lib/trace/trace_rpc.o 00:03:58.745 CC lib/notify/notify.o 00:03:58.745 CC lib/notify/notify_rpc.o 00:03:58.745 CC lib/keyring/keyring.o 00:03:58.745 CC lib/keyring/keyring_rpc.o 00:03:59.004 LIB libspdk_notify.a 00:03:59.004 SO libspdk_notify.so.6.0 00:03:59.004 LIB libspdk_trace.a 00:03:59.004 LIB libspdk_keyring.a 00:03:59.004 SYMLINK libspdk_notify.so 00:03:59.004 SO libspdk_trace.so.10.0 00:03:59.262 SO libspdk_keyring.so.1.0 00:03:59.262 SYMLINK libspdk_trace.so 00:03:59.262 SYMLINK libspdk_keyring.so 00:03:59.521 CC lib/sock/sock.o 00:03:59.521 CC lib/sock/sock_rpc.o 00:03:59.521 CC lib/thread/thread.o 00:03:59.521 CC lib/thread/iobuf.o 00:04:00.109 LIB libspdk_sock.a 00:04:00.109 SO libspdk_sock.so.9.0 00:04:00.109 SYMLINK libspdk_sock.so 00:04:00.683 CC lib/nvme/nvme_ctrlr.o 00:04:00.683 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:00.683 CC lib/nvme/nvme_fabric.o 00:04:00.683 CC lib/nvme/nvme_ns_cmd.o 00:04:00.683 CC lib/nvme/nvme_ns.o 00:04:00.683 CC lib/nvme/nvme_pcie.o 00:04:00.683 CC lib/nvme/nvme_pcie_common.o 00:04:00.683 CC lib/nvme/nvme.o 00:04:00.683 CC lib/nvme/nvme_qpair.o 00:04:01.249 CC lib/nvme/nvme_quirks.o 00:04:01.249 CC lib/nvme/nvme_transport.o 00:04:01.507 LIB libspdk_thread.a 00:04:01.507 SO libspdk_thread.so.10.0 00:04:01.507 CC lib/nvme/nvme_discovery.o 00:04:01.507 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:01.507 SYMLINK libspdk_thread.so 00:04:01.507 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:01.507 CC lib/nvme/nvme_tcp.o 00:04:01.507 CC lib/nvme/nvme_opal.o 00:04:01.765 CC lib/nvme/nvme_io_msg.o 00:04:02.023 CC lib/nvme/nvme_poll_group.o 00:04:02.023 CC lib/nvme/nvme_zns.o 00:04:02.023 CC lib/nvme/nvme_stubs.o 00:04:02.281 CC lib/nvme/nvme_auth.o 00:04:02.281 CC lib/nvme/nvme_cuse.o 00:04:02.281 CC lib/nvme/nvme_rdma.o 00:04:02.539 CC lib/accel/accel.o 00:04:02.539 CC lib/blob/blobstore.o 00:04:02.797 CC lib/blob/request.o 00:04:02.797 CC lib/blob/zeroes.o 00:04:02.797 CC lib/blob/blob_bs_dev.o 00:04:03.053 CC lib/accel/accel_rpc.o 00:04:03.310 CC lib/init/json_config.o 00:04:03.310 CC lib/init/subsystem.o 00:04:03.310 CC 
lib/init/subsystem_rpc.o 00:04:03.310 CC lib/virtio/virtio.o 00:04:03.310 CC lib/init/rpc.o 00:04:03.310 CC lib/virtio/virtio_vhost_user.o 00:04:03.568 CC lib/virtio/virtio_vfio_user.o 00:04:03.568 CC lib/accel/accel_sw.o 00:04:03.568 LIB libspdk_init.a 00:04:03.568 SO libspdk_init.so.5.0 00:04:03.827 CC lib/virtio/virtio_pci.o 00:04:03.827 SYMLINK libspdk_init.so 00:04:03.827 LIB libspdk_accel.a 00:04:03.827 LIB libspdk_nvme.a 00:04:04.084 SO libspdk_accel.so.15.0 00:04:04.084 CC lib/event/reactor.o 00:04:04.084 CC lib/event/log_rpc.o 00:04:04.084 CC lib/event/app.o 00:04:04.084 CC lib/event/app_rpc.o 00:04:04.084 CC lib/event/scheduler_static.o 00:04:04.084 LIB libspdk_virtio.a 00:04:04.084 SYMLINK libspdk_accel.so 00:04:04.084 SO libspdk_virtio.so.7.0 00:04:04.084 SO libspdk_nvme.so.13.0 00:04:04.341 SYMLINK libspdk_virtio.so 00:04:04.341 CC lib/bdev/bdev.o 00:04:04.341 CC lib/bdev/part.o 00:04:04.341 CC lib/bdev/bdev_rpc.o 00:04:04.341 CC lib/bdev/scsi_nvme.o 00:04:04.341 CC lib/bdev/bdev_zone.o 00:04:04.598 SYMLINK libspdk_nvme.so 00:04:04.598 LIB libspdk_event.a 00:04:04.598 SO libspdk_event.so.13.0 00:04:04.856 SYMLINK libspdk_event.so 00:04:06.772 LIB libspdk_blob.a 00:04:06.772 SO libspdk_blob.so.11.0 00:04:07.031 SYMLINK libspdk_blob.so 00:04:07.291 CC lib/lvol/lvol.o 00:04:07.291 CC lib/blobfs/blobfs.o 00:04:07.291 CC lib/blobfs/tree.o 00:04:07.552 LIB libspdk_bdev.a 00:04:07.552 SO libspdk_bdev.so.15.0 00:04:07.811 SYMLINK libspdk_bdev.so 00:04:08.071 CC lib/ftl/ftl_core.o 00:04:08.071 CC lib/ftl/ftl_init.o 00:04:08.071 CC lib/ftl/ftl_layout.o 00:04:08.071 CC lib/ftl/ftl_debug.o 00:04:08.071 CC lib/ublk/ublk.o 00:04:08.071 CC lib/nbd/nbd.o 00:04:08.071 CC lib/scsi/dev.o 00:04:08.071 CC lib/nvmf/ctrlr.o 00:04:08.071 CC lib/ftl/ftl_io.o 00:04:08.330 CC lib/scsi/lun.o 00:04:08.330 CC lib/scsi/port.o 00:04:08.330 LIB libspdk_blobfs.a 00:04:08.330 CC lib/scsi/scsi.o 00:04:08.330 SO libspdk_blobfs.so.10.0 00:04:08.330 SYMLINK libspdk_blobfs.so 00:04:08.330 CC lib/scsi/scsi_bdev.o 00:04:08.330 CC lib/ftl/ftl_sb.o 00:04:08.330 LIB libspdk_lvol.a 00:04:08.588 CC lib/ftl/ftl_l2p.o 00:04:08.588 SO libspdk_lvol.so.10.0 00:04:08.588 CC lib/scsi/scsi_pr.o 00:04:08.588 CC lib/ublk/ublk_rpc.o 00:04:08.588 CC lib/nbd/nbd_rpc.o 00:04:08.588 SYMLINK libspdk_lvol.so 00:04:08.588 CC lib/ftl/ftl_l2p_flat.o 00:04:08.588 CC lib/ftl/ftl_nv_cache.o 00:04:08.589 CC lib/ftl/ftl_band.o 00:04:08.589 CC lib/ftl/ftl_band_ops.o 00:04:08.589 CC lib/ftl/ftl_writer.o 00:04:08.847 LIB libspdk_nbd.a 00:04:08.847 LIB libspdk_ublk.a 00:04:08.847 SO libspdk_nbd.so.7.0 00:04:08.847 SO libspdk_ublk.so.3.0 00:04:08.847 CC lib/scsi/scsi_rpc.o 00:04:08.847 SYMLINK libspdk_nbd.so 00:04:08.847 SYMLINK libspdk_ublk.so 00:04:08.847 CC lib/scsi/task.o 00:04:08.847 CC lib/nvmf/ctrlr_discovery.o 00:04:08.847 CC lib/ftl/ftl_rq.o 00:04:08.847 CC lib/nvmf/ctrlr_bdev.o 00:04:09.107 CC lib/nvmf/subsystem.o 00:04:09.107 CC lib/ftl/ftl_reloc.o 00:04:09.107 CC lib/ftl/ftl_l2p_cache.o 00:04:09.107 CC lib/ftl/ftl_p2l.o 00:04:09.107 CC lib/nvmf/nvmf.o 00:04:09.107 LIB libspdk_scsi.a 00:04:09.107 SO libspdk_scsi.so.9.0 00:04:09.367 SYMLINK libspdk_scsi.so 00:04:09.367 CC lib/nvmf/nvmf_rpc.o 00:04:09.367 CC lib/ftl/mngt/ftl_mngt.o 00:04:09.367 CC lib/nvmf/transport.o 00:04:09.367 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:09.626 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:09.626 CC lib/nvmf/tcp.o 00:04:09.885 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:09.885 CC lib/nvmf/stubs.o 00:04:09.885 CC lib/nvmf/mdns_server.o 00:04:09.885 CC lib/iscsi/conn.o 
00:04:09.885 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:10.143 CC lib/nvmf/rdma.o 00:04:10.143 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:10.143 CC lib/nvmf/auth.o 00:04:10.143 CC lib/iscsi/init_grp.o 00:04:10.401 CC lib/iscsi/iscsi.o 00:04:10.401 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:10.401 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:10.401 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:10.659 CC lib/iscsi/md5.o 00:04:10.659 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:10.659 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:10.659 CC lib/vhost/vhost.o 00:04:10.659 CC lib/vhost/vhost_rpc.o 00:04:10.659 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:10.659 CC lib/iscsi/param.o 00:04:10.659 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:10.918 CC lib/iscsi/portal_grp.o 00:04:10.918 CC lib/iscsi/tgt_node.o 00:04:11.177 CC lib/vhost/vhost_scsi.o 00:04:11.177 CC lib/vhost/vhost_blk.o 00:04:11.177 CC lib/iscsi/iscsi_subsystem.o 00:04:11.177 CC lib/ftl/utils/ftl_conf.o 00:04:11.177 CC lib/ftl/utils/ftl_md.o 00:04:11.177 CC lib/ftl/utils/ftl_mempool.o 00:04:11.436 CC lib/vhost/rte_vhost_user.o 00:04:11.436 CC lib/iscsi/iscsi_rpc.o 00:04:11.436 CC lib/ftl/utils/ftl_bitmap.o 00:04:11.694 CC lib/ftl/utils/ftl_property.o 00:04:11.694 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:11.694 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:11.694 CC lib/iscsi/task.o 00:04:11.954 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:11.954 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:11.954 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:11.954 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:11.954 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:11.954 LIB libspdk_iscsi.a 00:04:11.954 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:11.954 SO libspdk_iscsi.so.8.0 00:04:12.213 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:12.213 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:12.213 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:12.213 CC lib/ftl/base/ftl_base_dev.o 00:04:12.213 CC lib/ftl/base/ftl_base_bdev.o 00:04:12.213 CC lib/ftl/ftl_trace.o 00:04:12.213 SYMLINK libspdk_iscsi.so 00:04:12.472 LIB libspdk_ftl.a 00:04:12.472 LIB libspdk_vhost.a 00:04:12.732 SO libspdk_vhost.so.8.0 00:04:12.732 SO libspdk_ftl.so.9.0 00:04:12.732 SYMLINK libspdk_vhost.so 00:04:12.732 LIB libspdk_nvmf.a 00:04:12.992 SO libspdk_nvmf.so.18.0 00:04:12.992 SYMLINK libspdk_ftl.so 00:04:13.252 SYMLINK libspdk_nvmf.so 00:04:13.512 CC module/env_dpdk/env_dpdk_rpc.o 00:04:13.771 CC module/accel/dsa/accel_dsa.o 00:04:13.771 CC module/blob/bdev/blob_bdev.o 00:04:13.771 CC module/keyring/file/keyring.o 00:04:13.771 CC module/accel/iaa/accel_iaa.o 00:04:13.771 CC module/sock/posix/posix.o 00:04:13.771 CC module/accel/error/accel_error.o 00:04:13.771 CC module/accel/ioat/accel_ioat.o 00:04:13.771 CC module/keyring/linux/keyring.o 00:04:13.771 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:13.771 LIB libspdk_env_dpdk_rpc.a 00:04:13.771 SO libspdk_env_dpdk_rpc.so.6.0 00:04:13.771 SYMLINK libspdk_env_dpdk_rpc.so 00:04:13.771 CC module/keyring/linux/keyring_rpc.o 00:04:13.771 CC module/keyring/file/keyring_rpc.o 00:04:13.771 CC module/accel/ioat/accel_ioat_rpc.o 00:04:13.772 CC module/accel/error/accel_error_rpc.o 00:04:13.772 LIB libspdk_scheduler_dynamic.a 00:04:13.772 CC module/accel/iaa/accel_iaa_rpc.o 00:04:13.772 SO libspdk_scheduler_dynamic.so.4.0 00:04:14.032 CC module/accel/dsa/accel_dsa_rpc.o 00:04:14.032 LIB libspdk_keyring_linux.a 00:04:14.032 SYMLINK libspdk_scheduler_dynamic.so 00:04:14.032 LIB libspdk_blob_bdev.a 00:04:14.032 LIB libspdk_accel_ioat.a 00:04:14.032 SO libspdk_keyring_linux.so.1.0 00:04:14.032 LIB libspdk_keyring_file.a 
00:04:14.032 LIB libspdk_accel_error.a 00:04:14.032 SO libspdk_blob_bdev.so.11.0 00:04:14.032 LIB libspdk_accel_iaa.a 00:04:14.032 SO libspdk_accel_ioat.so.6.0 00:04:14.032 SO libspdk_keyring_file.so.1.0 00:04:14.032 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:14.032 SO libspdk_accel_error.so.2.0 00:04:14.032 SO libspdk_accel_iaa.so.3.0 00:04:14.032 SYMLINK libspdk_keyring_linux.so 00:04:14.032 LIB libspdk_accel_dsa.a 00:04:14.032 SYMLINK libspdk_blob_bdev.so 00:04:14.032 SYMLINK libspdk_accel_ioat.so 00:04:14.032 SYMLINK libspdk_keyring_file.so 00:04:14.032 SYMLINK libspdk_accel_error.so 00:04:14.032 SYMLINK libspdk_accel_iaa.so 00:04:14.032 SO libspdk_accel_dsa.so.5.0 00:04:14.032 CC module/scheduler/gscheduler/gscheduler.o 00:04:14.299 SYMLINK libspdk_accel_dsa.so 00:04:14.299 LIB libspdk_scheduler_dpdk_governor.a 00:04:14.299 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:14.299 LIB libspdk_scheduler_gscheduler.a 00:04:14.299 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:14.299 SO libspdk_scheduler_gscheduler.so.4.0 00:04:14.299 CC module/bdev/gpt/gpt.o 00:04:14.299 CC module/bdev/null/bdev_null.o 00:04:14.299 CC module/bdev/error/vbdev_error.o 00:04:14.299 CC module/bdev/lvol/vbdev_lvol.o 00:04:14.299 CC module/bdev/malloc/bdev_malloc.o 00:04:14.299 SYMLINK libspdk_scheduler_gscheduler.so 00:04:14.299 CC module/blobfs/bdev/blobfs_bdev.o 00:04:14.299 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:14.299 CC module/bdev/delay/vbdev_delay.o 00:04:14.564 CC module/bdev/nvme/bdev_nvme.o 00:04:14.564 LIB libspdk_sock_posix.a 00:04:14.564 CC module/bdev/gpt/vbdev_gpt.o 00:04:14.564 SO libspdk_sock_posix.so.6.0 00:04:14.564 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:14.564 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:14.564 SYMLINK libspdk_sock_posix.so 00:04:14.564 CC module/bdev/nvme/nvme_rpc.o 00:04:14.564 CC module/bdev/null/bdev_null_rpc.o 00:04:14.564 CC module/bdev/error/vbdev_error_rpc.o 00:04:14.825 LIB libspdk_blobfs_bdev.a 00:04:14.825 SO libspdk_blobfs_bdev.so.6.0 00:04:14.825 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:14.825 LIB libspdk_bdev_gpt.a 00:04:14.825 SYMLINK libspdk_blobfs_bdev.so 00:04:14.825 CC module/bdev/nvme/bdev_mdns_client.o 00:04:14.825 LIB libspdk_bdev_malloc.a 00:04:14.825 SO libspdk_bdev_gpt.so.6.0 00:04:14.825 LIB libspdk_bdev_null.a 00:04:14.825 SO libspdk_bdev_malloc.so.6.0 00:04:14.825 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:14.825 LIB libspdk_bdev_error.a 00:04:14.825 SO libspdk_bdev_null.so.6.0 00:04:14.825 SYMLINK libspdk_bdev_gpt.so 00:04:14.825 SO libspdk_bdev_error.so.6.0 00:04:14.825 LIB libspdk_bdev_delay.a 00:04:15.083 SYMLINK libspdk_bdev_malloc.so 00:04:15.083 SYMLINK libspdk_bdev_null.so 00:04:15.083 SO libspdk_bdev_delay.so.6.0 00:04:15.083 SYMLINK libspdk_bdev_error.so 00:04:15.083 CC module/bdev/passthru/vbdev_passthru.o 00:04:15.083 SYMLINK libspdk_bdev_delay.so 00:04:15.083 CC module/bdev/raid/bdev_raid.o 00:04:15.083 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:15.083 CC module/bdev/split/vbdev_split.o 00:04:15.083 CC module/bdev/xnvme/bdev_xnvme.o 00:04:15.083 CC module/bdev/aio/bdev_aio.o 00:04:15.341 CC module/bdev/ftl/bdev_ftl.o 00:04:15.341 CC module/bdev/nvme/vbdev_opal.o 00:04:15.341 LIB libspdk_bdev_lvol.a 00:04:15.341 CC module/bdev/split/vbdev_split_rpc.o 00:04:15.341 SO libspdk_bdev_lvol.so.6.0 00:04:15.341 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:15.341 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:04:15.341 SYMLINK libspdk_bdev_lvol.so 00:04:15.341 CC module/bdev/nvme/vbdev_opal_rpc.o 
00:04:15.599 CC module/bdev/aio/bdev_aio_rpc.o 00:04:15.599 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:15.599 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:15.599 LIB libspdk_bdev_passthru.a 00:04:15.599 LIB libspdk_bdev_split.a 00:04:15.599 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:15.599 LIB libspdk_bdev_xnvme.a 00:04:15.599 SO libspdk_bdev_passthru.so.6.0 00:04:15.599 SO libspdk_bdev_split.so.6.0 00:04:15.599 SO libspdk_bdev_xnvme.so.3.0 00:04:15.599 LIB libspdk_bdev_aio.a 00:04:15.599 SYMLINK libspdk_bdev_split.so 00:04:15.599 SYMLINK libspdk_bdev_xnvme.so 00:04:15.599 CC module/bdev/raid/bdev_raid_rpc.o 00:04:15.599 CC module/bdev/raid/bdev_raid_sb.o 00:04:15.599 SYMLINK libspdk_bdev_passthru.so 00:04:15.599 SO libspdk_bdev_aio.so.6.0 00:04:15.599 LIB libspdk_bdev_zone_block.a 00:04:15.858 LIB libspdk_bdev_ftl.a 00:04:15.858 SO libspdk_bdev_zone_block.so.6.0 00:04:15.858 CC module/bdev/raid/raid0.o 00:04:15.858 SYMLINK libspdk_bdev_aio.so 00:04:15.858 CC module/bdev/raid/raid1.o 00:04:15.858 SO libspdk_bdev_ftl.so.6.0 00:04:15.858 SYMLINK libspdk_bdev_zone_block.so 00:04:15.858 CC module/bdev/raid/concat.o 00:04:15.858 SYMLINK libspdk_bdev_ftl.so 00:04:15.858 CC module/bdev/iscsi/bdev_iscsi.o 00:04:15.858 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:15.858 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:16.115 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:16.115 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:16.115 LIB libspdk_bdev_raid.a 00:04:16.374 SO libspdk_bdev_raid.so.6.0 00:04:16.374 LIB libspdk_bdev_iscsi.a 00:04:16.374 SYMLINK libspdk_bdev_raid.so 00:04:16.374 SO libspdk_bdev_iscsi.so.6.0 00:04:16.632 SYMLINK libspdk_bdev_iscsi.so 00:04:16.632 LIB libspdk_bdev_virtio.a 00:04:16.632 SO libspdk_bdev_virtio.so.6.0 00:04:16.632 SYMLINK libspdk_bdev_virtio.so 00:04:17.199 LIB libspdk_bdev_nvme.a 00:04:17.199 SO libspdk_bdev_nvme.so.7.0 00:04:17.457 SYMLINK libspdk_bdev_nvme.so 00:04:18.021 CC module/event/subsystems/vmd/vmd.o 00:04:18.021 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:18.021 CC module/event/subsystems/keyring/keyring.o 00:04:18.021 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:18.022 CC module/event/subsystems/sock/sock.o 00:04:18.022 CC module/event/subsystems/iobuf/iobuf.o 00:04:18.022 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:18.022 CC module/event/subsystems/scheduler/scheduler.o 00:04:18.022 LIB libspdk_event_keyring.a 00:04:18.022 LIB libspdk_event_vhost_blk.a 00:04:18.022 SO libspdk_event_keyring.so.1.0 00:04:18.280 LIB libspdk_event_iobuf.a 00:04:18.280 LIB libspdk_event_sock.a 00:04:18.280 LIB libspdk_event_vmd.a 00:04:18.280 SO libspdk_event_vhost_blk.so.3.0 00:04:18.280 SO libspdk_event_sock.so.5.0 00:04:18.280 SO libspdk_event_iobuf.so.3.0 00:04:18.280 SO libspdk_event_vmd.so.6.0 00:04:18.280 LIB libspdk_event_scheduler.a 00:04:18.280 SYMLINK libspdk_event_keyring.so 00:04:18.280 SYMLINK libspdk_event_vhost_blk.so 00:04:18.280 SO libspdk_event_scheduler.so.4.0 00:04:18.280 SYMLINK libspdk_event_sock.so 00:04:18.280 SYMLINK libspdk_event_iobuf.so 00:04:18.280 SYMLINK libspdk_event_vmd.so 00:04:18.280 SYMLINK libspdk_event_scheduler.so 00:04:18.539 CC module/event/subsystems/accel/accel.o 00:04:18.797 LIB libspdk_event_accel.a 00:04:18.797 SO libspdk_event_accel.so.6.0 00:04:19.056 SYMLINK libspdk_event_accel.so 00:04:19.315 CC module/event/subsystems/bdev/bdev.o 00:04:19.573 LIB libspdk_event_bdev.a 00:04:19.573 SO libspdk_event_bdev.so.6.0 00:04:19.573 SYMLINK libspdk_event_bdev.so 00:04:19.832 CC 
module/event/subsystems/ublk/ublk.o 00:04:19.832 CC module/event/subsystems/scsi/scsi.o 00:04:19.832 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:19.832 CC module/event/subsystems/nbd/nbd.o 00:04:19.832 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:20.092 LIB libspdk_event_ublk.a 00:04:20.092 LIB libspdk_event_nbd.a 00:04:20.092 LIB libspdk_event_scsi.a 00:04:20.092 SO libspdk_event_ublk.so.3.0 00:04:20.092 SO libspdk_event_nbd.so.6.0 00:04:20.092 SO libspdk_event_scsi.so.6.0 00:04:20.092 SYMLINK libspdk_event_ublk.so 00:04:20.092 SYMLINK libspdk_event_scsi.so 00:04:20.092 LIB libspdk_event_nvmf.a 00:04:20.351 SYMLINK libspdk_event_nbd.so 00:04:20.351 SO libspdk_event_nvmf.so.6.0 00:04:20.351 SYMLINK libspdk_event_nvmf.so 00:04:20.610 CC module/event/subsystems/iscsi/iscsi.o 00:04:20.610 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:20.610 LIB libspdk_event_iscsi.a 00:04:20.870 SO libspdk_event_iscsi.so.6.0 00:04:20.870 LIB libspdk_event_vhost_scsi.a 00:04:20.870 SO libspdk_event_vhost_scsi.so.3.0 00:04:20.870 SYMLINK libspdk_event_iscsi.so 00:04:20.870 SYMLINK libspdk_event_vhost_scsi.so 00:04:21.129 SO libspdk.so.6.0 00:04:21.129 SYMLINK libspdk.so 00:04:21.388 TEST_HEADER include/spdk/accel.h 00:04:21.388 TEST_HEADER include/spdk/accel_module.h 00:04:21.388 TEST_HEADER include/spdk/assert.h 00:04:21.388 TEST_HEADER include/spdk/barrier.h 00:04:21.388 CXX app/trace/trace.o 00:04:21.388 TEST_HEADER include/spdk/base64.h 00:04:21.388 TEST_HEADER include/spdk/bdev.h 00:04:21.388 TEST_HEADER include/spdk/bdev_module.h 00:04:21.388 TEST_HEADER include/spdk/bdev_zone.h 00:04:21.388 TEST_HEADER include/spdk/bit_array.h 00:04:21.388 TEST_HEADER include/spdk/bit_pool.h 00:04:21.388 TEST_HEADER include/spdk/blob_bdev.h 00:04:21.388 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:21.388 TEST_HEADER include/spdk/blobfs.h 00:04:21.388 TEST_HEADER include/spdk/blob.h 00:04:21.388 TEST_HEADER include/spdk/conf.h 00:04:21.388 TEST_HEADER include/spdk/config.h 00:04:21.388 TEST_HEADER include/spdk/cpuset.h 00:04:21.388 TEST_HEADER include/spdk/crc16.h 00:04:21.388 TEST_HEADER include/spdk/crc32.h 00:04:21.388 TEST_HEADER include/spdk/crc64.h 00:04:21.388 TEST_HEADER include/spdk/dif.h 00:04:21.388 TEST_HEADER include/spdk/dma.h 00:04:21.388 TEST_HEADER include/spdk/endian.h 00:04:21.388 TEST_HEADER include/spdk/env_dpdk.h 00:04:21.388 TEST_HEADER include/spdk/env.h 00:04:21.388 TEST_HEADER include/spdk/event.h 00:04:21.388 TEST_HEADER include/spdk/fd_group.h 00:04:21.388 TEST_HEADER include/spdk/fd.h 00:04:21.388 TEST_HEADER include/spdk/file.h 00:04:21.388 TEST_HEADER include/spdk/ftl.h 00:04:21.388 TEST_HEADER include/spdk/gpt_spec.h 00:04:21.388 TEST_HEADER include/spdk/hexlify.h 00:04:21.388 TEST_HEADER include/spdk/histogram_data.h 00:04:21.388 TEST_HEADER include/spdk/idxd.h 00:04:21.388 CC examples/accel/perf/accel_perf.o 00:04:21.388 TEST_HEADER include/spdk/idxd_spec.h 00:04:21.388 TEST_HEADER include/spdk/init.h 00:04:21.388 CC test/event/event_perf/event_perf.o 00:04:21.388 TEST_HEADER include/spdk/ioat.h 00:04:21.388 TEST_HEADER include/spdk/ioat_spec.h 00:04:21.388 CC test/bdev/bdevio/bdevio.o 00:04:21.388 CC test/accel/dif/dif.o 00:04:21.388 TEST_HEADER include/spdk/iscsi_spec.h 00:04:21.388 TEST_HEADER include/spdk/json.h 00:04:21.388 TEST_HEADER include/spdk/jsonrpc.h 00:04:21.388 CC test/blobfs/mkfs/mkfs.o 00:04:21.388 TEST_HEADER include/spdk/keyring.h 00:04:21.388 CC test/dma/test_dma/test_dma.o 00:04:21.388 TEST_HEADER include/spdk/keyring_module.h 00:04:21.648 CC 
test/app/bdev_svc/bdev_svc.o 00:04:21.648 TEST_HEADER include/spdk/likely.h 00:04:21.648 TEST_HEADER include/spdk/log.h 00:04:21.648 TEST_HEADER include/spdk/lvol.h 00:04:21.648 TEST_HEADER include/spdk/memory.h 00:04:21.648 TEST_HEADER include/spdk/mmio.h 00:04:21.648 TEST_HEADER include/spdk/nbd.h 00:04:21.648 TEST_HEADER include/spdk/notify.h 00:04:21.648 TEST_HEADER include/spdk/nvme.h 00:04:21.648 TEST_HEADER include/spdk/nvme_intel.h 00:04:21.648 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:21.648 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:21.648 TEST_HEADER include/spdk/nvme_spec.h 00:04:21.648 CC test/env/mem_callbacks/mem_callbacks.o 00:04:21.648 TEST_HEADER include/spdk/nvme_zns.h 00:04:21.648 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:21.648 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:21.648 TEST_HEADER include/spdk/nvmf.h 00:04:21.648 TEST_HEADER include/spdk/nvmf_spec.h 00:04:21.648 TEST_HEADER include/spdk/nvmf_transport.h 00:04:21.648 TEST_HEADER include/spdk/opal.h 00:04:21.648 TEST_HEADER include/spdk/opal_spec.h 00:04:21.648 TEST_HEADER include/spdk/pci_ids.h 00:04:21.648 TEST_HEADER include/spdk/pipe.h 00:04:21.648 TEST_HEADER include/spdk/queue.h 00:04:21.648 TEST_HEADER include/spdk/reduce.h 00:04:21.648 TEST_HEADER include/spdk/rpc.h 00:04:21.648 TEST_HEADER include/spdk/scheduler.h 00:04:21.648 TEST_HEADER include/spdk/scsi.h 00:04:21.648 TEST_HEADER include/spdk/scsi_spec.h 00:04:21.648 TEST_HEADER include/spdk/sock.h 00:04:21.648 TEST_HEADER include/spdk/stdinc.h 00:04:21.648 TEST_HEADER include/spdk/string.h 00:04:21.648 LINK event_perf 00:04:21.648 TEST_HEADER include/spdk/thread.h 00:04:21.648 TEST_HEADER include/spdk/trace.h 00:04:21.648 TEST_HEADER include/spdk/trace_parser.h 00:04:21.649 TEST_HEADER include/spdk/tree.h 00:04:21.649 TEST_HEADER include/spdk/ublk.h 00:04:21.649 TEST_HEADER include/spdk/util.h 00:04:21.649 TEST_HEADER include/spdk/uuid.h 00:04:21.649 TEST_HEADER include/spdk/version.h 00:04:21.649 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:21.649 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:21.649 TEST_HEADER include/spdk/vhost.h 00:04:21.649 TEST_HEADER include/spdk/vmd.h 00:04:21.649 TEST_HEADER include/spdk/xor.h 00:04:21.649 TEST_HEADER include/spdk/zipf.h 00:04:21.649 CXX test/cpp_headers/accel.o 00:04:21.649 LINK bdev_svc 00:04:21.649 LINK mkfs 00:04:21.908 LINK spdk_trace 00:04:21.908 LINK bdevio 00:04:21.908 CXX test/cpp_headers/accel_module.o 00:04:21.908 CC test/event/reactor/reactor.o 00:04:21.908 LINK accel_perf 00:04:21.908 LINK test_dma 00:04:22.166 CC test/event/reactor_perf/reactor_perf.o 00:04:22.166 CXX test/cpp_headers/assert.o 00:04:22.166 LINK reactor 00:04:22.166 CC app/trace_record/trace_record.o 00:04:22.166 LINK dif 00:04:22.166 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:22.167 LINK mem_callbacks 00:04:22.167 LINK reactor_perf 00:04:22.167 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:22.167 CXX test/cpp_headers/barrier.o 00:04:22.167 CXX test/cpp_headers/base64.o 00:04:22.425 CC test/event/app_repeat/app_repeat.o 00:04:22.425 CC examples/bdev/hello_world/hello_bdev.o 00:04:22.425 CXX test/cpp_headers/bdev.o 00:04:22.425 CXX test/cpp_headers/bdev_module.o 00:04:22.425 CC test/env/vtophys/vtophys.o 00:04:22.425 LINK spdk_trace_record 00:04:22.425 LINK app_repeat 00:04:22.425 CC test/event/scheduler/scheduler.o 00:04:22.425 LINK nvme_fuzz 00:04:22.683 LINK vtophys 00:04:22.683 CXX test/cpp_headers/bdev_zone.o 00:04:22.683 LINK hello_bdev 00:04:22.683 CC test/lvol/esnap/esnap.o 00:04:22.683 CXX 
test/cpp_headers/bit_array.o 00:04:22.683 CC examples/bdev/bdevperf/bdevperf.o 00:04:22.683 CXX test/cpp_headers/bit_pool.o 00:04:22.683 CC app/nvmf_tgt/nvmf_main.o 00:04:22.942 LINK scheduler 00:04:22.942 CXX test/cpp_headers/blob_bdev.o 00:04:22.942 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:22.942 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:22.942 LINK nvmf_tgt 00:04:22.942 CC app/iscsi_tgt/iscsi_tgt.o 00:04:22.942 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:22.942 LINK env_dpdk_post_init 00:04:22.942 CC test/nvme/aer/aer.o 00:04:22.942 CXX test/cpp_headers/blobfs_bdev.o 00:04:23.201 CC test/rpc_client/rpc_client_test.o 00:04:23.201 LINK iscsi_tgt 00:04:23.201 CXX test/cpp_headers/blobfs.o 00:04:23.201 CXX test/cpp_headers/blob.o 00:04:23.460 CC test/env/memory/memory_ut.o 00:04:23.460 LINK aer 00:04:23.460 CXX test/cpp_headers/conf.o 00:04:23.460 LINK rpc_client_test 00:04:23.717 CC test/nvme/reset/reset.o 00:04:23.717 CXX test/cpp_headers/config.o 00:04:23.717 LINK vhost_fuzz 00:04:23.717 CC app/spdk_tgt/spdk_tgt.o 00:04:23.717 CXX test/cpp_headers/cpuset.o 00:04:23.717 LINK bdevperf 00:04:23.717 CXX test/cpp_headers/crc16.o 00:04:23.717 CC test/env/pci/pci_ut.o 00:04:23.975 CC test/thread/poller_perf/poller_perf.o 00:04:23.975 CC test/app/histogram_perf/histogram_perf.o 00:04:23.975 LINK reset 00:04:23.975 LINK spdk_tgt 00:04:23.975 CXX test/cpp_headers/crc32.o 00:04:23.975 LINK poller_perf 00:04:24.233 CXX test/cpp_headers/crc64.o 00:04:24.233 LINK histogram_perf 00:04:24.233 CC examples/blob/hello_world/hello_blob.o 00:04:24.233 CC test/nvme/sgl/sgl.o 00:04:24.233 LINK pci_ut 00:04:24.233 LINK iscsi_fuzz 00:04:24.233 CC examples/blob/cli/blobcli.o 00:04:24.233 CC app/spdk_lspci/spdk_lspci.o 00:04:24.492 CXX test/cpp_headers/dif.o 00:04:24.492 CC test/app/jsoncat/jsoncat.o 00:04:24.492 LINK hello_blob 00:04:24.492 LINK spdk_lspci 00:04:24.492 LINK sgl 00:04:24.492 LINK jsoncat 00:04:24.492 CXX test/cpp_headers/dma.o 00:04:24.492 CXX test/cpp_headers/endian.o 00:04:24.492 LINK memory_ut 00:04:24.750 CC test/app/stub/stub.o 00:04:24.750 CXX test/cpp_headers/env_dpdk.o 00:04:24.750 CXX test/cpp_headers/env.o 00:04:24.750 CC app/spdk_nvme_perf/perf.o 00:04:24.750 CXX test/cpp_headers/event.o 00:04:24.750 CC test/nvme/e2edp/nvme_dp.o 00:04:24.750 LINK stub 00:04:24.750 LINK blobcli 00:04:25.008 CC examples/ioat/perf/perf.o 00:04:25.008 CXX test/cpp_headers/fd_group.o 00:04:25.008 CC examples/vmd/lsvmd/lsvmd.o 00:04:25.008 CC examples/nvme/hello_world/hello_world.o 00:04:25.008 CC examples/sock/hello_world/hello_sock.o 00:04:25.008 CXX test/cpp_headers/fd.o 00:04:25.008 LINK ioat_perf 00:04:25.265 LINK lsvmd 00:04:25.265 LINK nvme_dp 00:04:25.265 CC examples/nvme/reconnect/reconnect.o 00:04:25.265 CXX test/cpp_headers/file.o 00:04:25.265 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:25.265 LINK hello_world 00:04:25.265 LINK hello_sock 00:04:25.265 CXX test/cpp_headers/ftl.o 00:04:25.265 CC examples/ioat/verify/verify.o 00:04:25.523 CC test/nvme/overhead/overhead.o 00:04:25.523 CC examples/vmd/led/led.o 00:04:25.523 LINK reconnect 00:04:25.523 CXX test/cpp_headers/gpt_spec.o 00:04:25.523 LINK led 00:04:25.523 CC examples/nvmf/nvmf/nvmf.o 00:04:25.523 CC examples/util/zipf/zipf.o 00:04:25.523 LINK verify 00:04:25.523 LINK spdk_nvme_perf 00:04:25.523 CXX test/cpp_headers/hexlify.o 00:04:25.780 LINK nvme_manage 00:04:25.780 LINK overhead 00:04:25.780 LINK zipf 00:04:25.780 CXX test/cpp_headers/histogram_data.o 00:04:25.780 CC examples/nvme/arbitration/arbitration.o 
00:04:25.780 CC examples/nvme/hotplug/hotplug.o 00:04:25.780 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:25.780 CC app/spdk_nvme_identify/identify.o 00:04:25.780 LINK nvmf 00:04:26.038 CXX test/cpp_headers/idxd.o 00:04:26.038 CC app/spdk_nvme_discover/discovery_aer.o 00:04:26.038 CC test/nvme/err_injection/err_injection.o 00:04:26.038 CC examples/nvme/abort/abort.o 00:04:26.038 LINK hotplug 00:04:26.038 LINK cmb_copy 00:04:26.038 CXX test/cpp_headers/idxd_spec.o 00:04:26.038 LINK arbitration 00:04:26.295 LINK spdk_nvme_discover 00:04:26.295 LINK err_injection 00:04:26.295 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:26.295 CXX test/cpp_headers/init.o 00:04:26.295 CC app/spdk_top/spdk_top.o 00:04:26.295 CC app/vhost/vhost.o 00:04:26.295 LINK pmr_persistence 00:04:26.295 LINK abort 00:04:26.295 CXX test/cpp_headers/ioat.o 00:04:26.553 CC app/spdk_dd/spdk_dd.o 00:04:26.553 CC test/nvme/startup/startup.o 00:04:26.553 CC app/fio/nvme/fio_plugin.o 00:04:26.553 LINK vhost 00:04:26.553 CXX test/cpp_headers/ioat_spec.o 00:04:26.553 LINK startup 00:04:26.810 CXX test/cpp_headers/iscsi_spec.o 00:04:26.811 CC examples/idxd/perf/perf.o 00:04:26.811 CC examples/thread/thread/thread_ex.o 00:04:26.811 CXX test/cpp_headers/json.o 00:04:26.811 LINK spdk_nvme_identify 00:04:26.811 LINK spdk_dd 00:04:26.811 CC test/nvme/reserve/reserve.o 00:04:26.811 CXX test/cpp_headers/jsonrpc.o 00:04:27.069 LINK thread 00:04:27.069 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:27.069 LINK spdk_nvme 00:04:27.069 CC test/nvme/simple_copy/simple_copy.o 00:04:27.069 CXX test/cpp_headers/keyring.o 00:04:27.069 LINK idxd_perf 00:04:27.070 LINK reserve 00:04:27.070 CXX test/cpp_headers/keyring_module.o 00:04:27.327 CC app/fio/bdev/fio_plugin.o 00:04:27.327 LINK interrupt_tgt 00:04:27.327 LINK spdk_top 00:04:27.327 CC test/nvme/connect_stress/connect_stress.o 00:04:27.327 CXX test/cpp_headers/likely.o 00:04:27.327 CXX test/cpp_headers/log.o 00:04:27.327 CC test/nvme/boot_partition/boot_partition.o 00:04:27.327 LINK simple_copy 00:04:27.327 CXX test/cpp_headers/lvol.o 00:04:27.327 CC test/nvme/compliance/nvme_compliance.o 00:04:27.327 CXX test/cpp_headers/memory.o 00:04:27.585 CXX test/cpp_headers/mmio.o 00:04:27.585 CXX test/cpp_headers/nbd.o 00:04:27.585 LINK connect_stress 00:04:27.585 CXX test/cpp_headers/notify.o 00:04:27.585 LINK boot_partition 00:04:27.585 CXX test/cpp_headers/nvme.o 00:04:27.585 CXX test/cpp_headers/nvme_intel.o 00:04:27.585 CC test/nvme/fused_ordering/fused_ordering.o 00:04:27.923 CXX test/cpp_headers/nvme_ocssd.o 00:04:27.923 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:27.923 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:27.923 CXX test/cpp_headers/nvme_spec.o 00:04:27.923 CC test/nvme/fdp/fdp.o 00:04:27.923 LINK spdk_bdev 00:04:27.923 LINK fused_ordering 00:04:27.923 LINK nvme_compliance 00:04:27.923 CC test/nvme/cuse/cuse.o 00:04:27.923 CXX test/cpp_headers/nvme_zns.o 00:04:27.923 CXX test/cpp_headers/nvmf_cmd.o 00:04:27.923 LINK doorbell_aers 00:04:27.923 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:27.923 CXX test/cpp_headers/nvmf.o 00:04:28.210 CXX test/cpp_headers/nvmf_spec.o 00:04:28.210 CXX test/cpp_headers/nvmf_transport.o 00:04:28.210 CXX test/cpp_headers/opal.o 00:04:28.210 CXX test/cpp_headers/opal_spec.o 00:04:28.210 CXX test/cpp_headers/pci_ids.o 00:04:28.210 CXX test/cpp_headers/pipe.o 00:04:28.210 CXX test/cpp_headers/queue.o 00:04:28.210 CXX test/cpp_headers/reduce.o 00:04:28.210 LINK fdp 00:04:28.210 CXX test/cpp_headers/rpc.o 00:04:28.210 CXX test/cpp_headers/scheduler.o 
00:04:28.210 CXX test/cpp_headers/scsi.o 00:04:28.210 CXX test/cpp_headers/scsi_spec.o 00:04:28.469 CXX test/cpp_headers/sock.o 00:04:28.469 CXX test/cpp_headers/stdinc.o 00:04:28.469 CXX test/cpp_headers/string.o 00:04:28.469 CXX test/cpp_headers/thread.o 00:04:28.469 CXX test/cpp_headers/trace.o 00:04:28.469 CXX test/cpp_headers/trace_parser.o 00:04:28.469 CXX test/cpp_headers/tree.o 00:04:28.469 CXX test/cpp_headers/ublk.o 00:04:28.469 CXX test/cpp_headers/util.o 00:04:28.469 CXX test/cpp_headers/uuid.o 00:04:28.469 CXX test/cpp_headers/version.o 00:04:28.469 CXX test/cpp_headers/vfio_user_pci.o 00:04:28.469 CXX test/cpp_headers/vfio_user_spec.o 00:04:28.469 CXX test/cpp_headers/vhost.o 00:04:28.469 CXX test/cpp_headers/vmd.o 00:04:28.728 CXX test/cpp_headers/xor.o 00:04:28.728 CXX test/cpp_headers/zipf.o 00:04:28.728 LINK esnap 00:04:29.292 LINK cuse 00:04:29.550 00:04:29.550 real 0m57.259s 00:04:29.550 user 5m15.567s 00:04:29.550 sys 1m10.395s 00:04:29.550 11:46:28 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:04:29.550 11:46:28 make -- common/autotest_common.sh@10 -- $ set +x 00:04:29.550 ************************************ 00:04:29.550 END TEST make 00:04:29.550 ************************************ 00:04:29.807 11:46:28 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:29.807 11:46:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:29.807 11:46:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:29.807 11:46:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:29.807 11:46:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:29.807 11:46:28 -- pm/common@44 -- $ pid=6135 00:04:29.807 11:46:28 -- pm/common@50 -- $ kill -TERM 6135 00:04:29.807 11:46:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:29.807 11:46:28 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:29.807 11:46:28 -- pm/common@44 -- $ pid=6137 00:04:29.807 11:46:28 -- pm/common@50 -- $ kill -TERM 6137 00:04:29.807 11:46:28 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:29.807 11:46:28 -- nvmf/common.sh@7 -- # uname -s 00:04:29.807 11:46:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:29.807 11:46:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:29.808 11:46:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:29.808 11:46:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:29.808 11:46:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:29.808 11:46:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:29.808 11:46:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:29.808 11:46:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:29.808 11:46:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:29.808 11:46:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:29.808 11:46:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:53483a59-0def-44b8-86f3-c10a14190d68 00:04:29.808 11:46:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=53483a59-0def-44b8-86f3-c10a14190d68 00:04:29.808 11:46:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:29.808 11:46:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:29.808 11:46:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:29.808 11:46:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:29.808 11:46:28 
-- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:29.808 11:46:28 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:29.808 11:46:28 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:29.808 11:46:28 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:29.808 11:46:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.808 11:46:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.808 11:46:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.808 11:46:28 -- paths/export.sh@5 -- # export PATH 00:04:29.808 11:46:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.808 11:46:28 -- nvmf/common.sh@47 -- # : 0 00:04:29.808 11:46:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:29.808 11:46:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:29.808 11:46:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:29.808 11:46:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:29.808 11:46:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:29.808 11:46:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:29.808 11:46:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:29.808 11:46:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:29.808 11:46:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:29.808 11:46:28 -- spdk/autotest.sh@32 -- # uname -s 00:04:29.808 11:46:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:29.808 11:46:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:29.808 11:46:28 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:29.808 11:46:28 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:29.808 11:46:28 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:29.808 11:46:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:30.065 11:46:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:30.065 11:46:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:30.065 11:46:28 -- spdk/autotest.sh@48 -- # udevadm_pid=66221 00:04:30.065 11:46:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:30.065 11:46:28 -- pm/common@17 -- # local monitor 00:04:30.065 11:46:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:30.065 11:46:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:30.065 11:46:28 -- 
pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:30.065 11:46:28 -- pm/common@21 -- # date +%s 00:04:30.065 11:46:28 -- pm/common@25 -- # sleep 1 00:04:30.065 11:46:28 -- pm/common@21 -- # date +%s 00:04:30.065 11:46:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721562388 00:04:30.065 11:46:28 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721562388 00:04:30.065 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721562388_collect-vmstat.pm.log 00:04:30.065 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721562388_collect-cpu-load.pm.log 00:04:31.000 11:46:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:31.000 11:46:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:31.000 11:46:29 -- common/autotest_common.sh@720 -- # xtrace_disable 00:04:31.000 11:46:29 -- common/autotest_common.sh@10 -- # set +x 00:04:31.000 11:46:29 -- spdk/autotest.sh@59 -- # create_test_list 00:04:31.000 11:46:29 -- common/autotest_common.sh@744 -- # xtrace_disable 00:04:31.000 11:46:29 -- common/autotest_common.sh@10 -- # set +x 00:04:31.000 11:46:29 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:31.000 11:46:29 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:31.000 11:46:29 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:31.000 11:46:29 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:31.000 11:46:29 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:31.000 11:46:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:31.000 11:46:29 -- common/autotest_common.sh@1451 -- # uname 00:04:31.000 11:46:29 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']' 00:04:31.000 11:46:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:31.000 11:46:29 -- common/autotest_common.sh@1471 -- # uname 00:04:31.000 11:46:29 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]] 00:04:31.000 11:46:29 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:04:31.000 11:46:29 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:04:31.000 11:46:29 -- spdk/autotest.sh@72 -- # hash lcov 00:04:31.000 11:46:29 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:31.000 11:46:29 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:04:31.000 --rc lcov_branch_coverage=1 00:04:31.000 --rc lcov_function_coverage=1 00:04:31.000 --rc genhtml_branch_coverage=1 00:04:31.000 --rc genhtml_function_coverage=1 00:04:31.000 --rc genhtml_legend=1 00:04:31.000 --rc geninfo_all_blocks=1 00:04:31.000 ' 00:04:31.000 11:46:29 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:04:31.000 --rc lcov_branch_coverage=1 00:04:31.000 --rc lcov_function_coverage=1 00:04:31.000 --rc genhtml_branch_coverage=1 00:04:31.000 --rc genhtml_function_coverage=1 00:04:31.000 --rc genhtml_legend=1 00:04:31.000 --rc geninfo_all_blocks=1 00:04:31.000 ' 00:04:31.000 11:46:29 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:04:31.000 --rc lcov_branch_coverage=1 00:04:31.000 --rc lcov_function_coverage=1 00:04:31.000 --rc genhtml_branch_coverage=1 00:04:31.000 --rc genhtml_function_coverage=1 00:04:31.000 --rc genhtml_legend=1 00:04:31.000 --rc geninfo_all_blocks=1 00:04:31.000 
--no-external' 00:04:31.000 11:46:29 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:04:31.000 --rc lcov_branch_coverage=1 00:04:31.000 --rc lcov_function_coverage=1 00:04:31.000 --rc genhtml_branch_coverage=1 00:04:31.000 --rc genhtml_function_coverage=1 00:04:31.000 --rc genhtml_legend=1 00:04:31.000 --rc geninfo_all_blocks=1 00:04:31.000 --no-external' 00:04:31.000 11:46:29 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:31.259 lcov: LCOV version 1.14 00:04:31.259 11:46:29 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:46.140 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:46.140 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:04:56.116 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:56.116 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:04:56.376 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:56.376 
geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:04:56.376 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:04:56.376 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:04:56.376 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:56.376 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:04:56.376 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:56.376 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:04:56.376 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:56.376 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:04:56.376 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:56.376 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:04:56.376 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:56.376 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:04:56.376 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:56.376 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:04:56.376 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:04:56.376 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:04:56.376 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:56.376 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:04:56.376 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:56.376 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:04:56.376 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:56.376 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:56.377 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:04:56.377 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:04:56.377 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:56.377 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:04:56.377 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:04:56.377 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:04:56.377 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:04:56.377 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:04:56.377 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:56.377 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 
00:04:56.377 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:04:56.377 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:04:56.377 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:56.377 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:04:56.377 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:56.377 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:04:56.377 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:56.377 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:04:56.377 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:56.377 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:04:56.377 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:56.377 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:04:56.377 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:56.377 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:04:56.377 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:56.377 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:04:56.377 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:56.377 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:56.637 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:56.637 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:04:56.637 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:56.637 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:56.637 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:56.637 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:04:56.637 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:56.637 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:56.637 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:56.637 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:04:56.637 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:56.637 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:56.637 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:56.637 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:56.637 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 
00:04:56.637 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:56.637 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:56.637 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:04:56.637 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:56.637 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:04:56.637 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:56.637 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:04:56.637 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:04:56.637 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:04:56.637 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:04:56.637 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:04:56.637 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:04:56.637 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:04:56.637 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:04:56.637 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:04:56.637 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:04:56.637 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:04:56.637 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:04:56.637 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:04:56.637 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:04:56.637 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:04:56.637 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:04:56.637 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:04:56.637 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:04:56.637 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:04:56.637 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:04:56.637 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:04:56.637 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:04:56.637 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:04:56.637 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:04:56.637 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:04:56.637 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:04:56.637 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 
00:04:56.637 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:04:56.638 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:04:56.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:04:56.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:04:56.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:04:56.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:04:56.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:04:56.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:04:56.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:04:56.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:04:56.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:04:56.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:56.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:56.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:56.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:56.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:04:56.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:56.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:04:56.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:56.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:04:56.897 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:56.897 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:00.186 11:46:58 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:00.186 11:46:58 -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:00.186 11:46:58 -- common/autotest_common.sh@10 -- # set +x 00:05:00.186 11:46:58 -- spdk/autotest.sh@91 -- # rm -f 00:05:00.186 11:46:58 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:00.444 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:01.381 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:01.381 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:01.381 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:05:01.381 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:05:01.381 11:46:59 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:01.381 11:46:59 -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:01.381 11:46:59 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:01.381 11:46:59 -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:01.381 11:46:59 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:01.381 11:46:59 -- 
common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:01.381 11:46:59 -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:01.381 11:46:59 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:01.381 11:46:59 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:01.381 11:46:59 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:01.381 11:46:59 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:05:01.381 11:46:59 -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:05:01.381 11:46:59 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:01.381 11:46:59 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:01.381 11:46:59 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:01.381 11:46:59 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n2 00:05:01.381 11:46:59 -- common/autotest_common.sh@1658 -- # local device=nvme1n2 00:05:01.381 11:46:59 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:01.381 11:46:59 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:01.381 11:46:59 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:01.381 11:47:00 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n3 00:05:01.381 11:47:00 -- common/autotest_common.sh@1658 -- # local device=nvme1n3 00:05:01.381 11:47:00 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:01.381 11:47:00 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:01.381 11:47:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:01.381 11:47:00 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2c2n1 00:05:01.381 11:47:00 -- common/autotest_common.sh@1658 -- # local device=nvme2c2n1 00:05:01.381 11:47:00 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:05:01.381 11:47:00 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:01.381 11:47:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:01.381 11:47:00 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:05:01.381 11:47:00 -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:05:01.381 11:47:00 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:01.381 11:47:00 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:01.381 11:47:00 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:01.381 11:47:00 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:05:01.381 11:47:00 -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:05:01.381 11:47:00 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:01.381 11:47:00 -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:01.381 11:47:00 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:01.381 11:47:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.381 11:47:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:01.381 11:47:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:01.381 11:47:00 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:01.381 11:47:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:01.381 No valid GPT data, bailing 00:05:01.381 11:47:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value 
/dev/nvme0n1 00:05:01.381 11:47:00 -- scripts/common.sh@391 -- # pt= 00:05:01.381 11:47:00 -- scripts/common.sh@392 -- # return 1 00:05:01.381 11:47:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:01.381 1+0 records in 00:05:01.381 1+0 records out 00:05:01.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143984 s, 72.8 MB/s 00:05:01.381 11:47:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.381 11:47:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:01.381 11:47:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:01.381 11:47:00 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:01.381 11:47:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:01.381 No valid GPT data, bailing 00:05:01.381 11:47:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:01.381 11:47:00 -- scripts/common.sh@391 -- # pt= 00:05:01.381 11:47:00 -- scripts/common.sh@392 -- # return 1 00:05:01.381 11:47:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:01.381 1+0 records in 00:05:01.381 1+0 records out 00:05:01.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0067832 s, 155 MB/s 00:05:01.381 11:47:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.381 11:47:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:01.381 11:47:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:05:01.381 11:47:00 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:05:01.381 11:47:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:01.381 No valid GPT data, bailing 00:05:01.640 11:47:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:01.640 11:47:00 -- scripts/common.sh@391 -- # pt= 00:05:01.640 11:47:00 -- scripts/common.sh@392 -- # return 1 00:05:01.640 11:47:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:01.640 1+0 records in 00:05:01.640 1+0 records out 00:05:01.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00351634 s, 298 MB/s 00:05:01.640 11:47:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.640 11:47:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:01.640 11:47:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:05:01.640 11:47:00 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:05:01.640 11:47:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:01.640 No valid GPT data, bailing 00:05:01.640 11:47:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:01.640 11:47:00 -- scripts/common.sh@391 -- # pt= 00:05:01.640 11:47:00 -- scripts/common.sh@392 -- # return 1 00:05:01.640 11:47:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:01.640 1+0 records in 00:05:01.640 1+0 records out 00:05:01.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00554433 s, 189 MB/s 00:05:01.640 11:47:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.640 11:47:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:01.640 11:47:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:05:01.640 11:47:00 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:05:01.640 11:47:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:05:01.640 No valid GPT data, bailing 00:05:01.640 11:47:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE 
-o value /dev/nvme2n1 00:05:01.640 11:47:00 -- scripts/common.sh@391 -- # pt= 00:05:01.640 11:47:00 -- scripts/common.sh@392 -- # return 1 00:05:01.640 11:47:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:05:01.640 1+0 records in 00:05:01.640 1+0 records out 00:05:01.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0062956 s, 167 MB/s 00:05:01.640 11:47:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:01.640 11:47:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:01.640 11:47:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme3n1 00:05:01.640 11:47:00 -- scripts/common.sh@378 -- # local block=/dev/nvme3n1 pt 00:05:01.640 11:47:00 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:05:01.640 No valid GPT data, bailing 00:05:01.640 11:47:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:01.640 11:47:00 -- scripts/common.sh@391 -- # pt= 00:05:01.640 11:47:00 -- scripts/common.sh@392 -- # return 1 00:05:01.640 11:47:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:05:01.640 1+0 records in 00:05:01.640 1+0 records out 00:05:01.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00621656 s, 169 MB/s 00:05:01.641 11:47:00 -- spdk/autotest.sh@118 -- # sync 00:05:01.899 11:47:00 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:01.899 11:47:00 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:01.899 11:47:00 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:04.440 11:47:03 -- spdk/autotest.sh@124 -- # uname -s 00:05:04.440 11:47:03 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:04.440 11:47:03 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:04.440 11:47:03 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:04.440 11:47:03 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:04.440 11:47:03 -- common/autotest_common.sh@10 -- # set +x 00:05:04.440 ************************************ 00:05:04.440 START TEST setup.sh 00:05:04.440 ************************************ 00:05:04.440 11:47:03 setup.sh -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:04.440 * Looking for test storage... 00:05:04.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:04.440 11:47:03 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:05:04.440 11:47:03 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:04.440 11:47:03 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:04.440 11:47:03 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:04.440 11:47:03 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:04.440 11:47:03 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:04.440 ************************************ 00:05:04.440 START TEST acl 00:05:04.440 ************************************ 00:05:04.440 11:47:03 setup.sh.acl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:04.440 * Looking for test storage... 
00:05:04.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:05:04.440 11:47:03 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:05:04.440 11:47:03 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=()
00:05:04.440 11:47:03 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs
00:05:04.440 11:47:03 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf
00:05:04.440 11:47:03 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:05:04.440 11:47:03 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1
00:05:04.440 11:47:03 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1
00:05:04.440 11:47:03 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:05:04.440 11:47:03 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]]
[xtrace condensed: the same five-line is_block_zoned check repeats for nvme1n1, nvme1n2, nvme1n3, nvme2c2n1, nvme2n1 and nvme3n1; /sys/block/<dev>/queue/zoned reads "none" for every namespace, so zoned_devs stays empty]
00:05:04.440 11:47:03 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:05:04.440 11:47:03 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:05:04.440 11:47:03 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:05:04.440 11:47:03 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:05:04.440 11:47:03 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:05:04.440 11:47:03 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:04.440 11:47:03 setup.sh.acl -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:05.816 11:47:04 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:05:05.816 11:47:04 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:05:05.816 11:47:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:05.816 11:47:04 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:05:05.816 11:47:04 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:05:05.816 11:47:04 setup.sh.acl -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:05:06.382 11:47:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]]
00:05:06.382 11:47:05 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:05:06.382 11:47:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:06.950 Hugepages
00:05:06.950 node hugesize free / total
00:05:06.950 11:47:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:05:06.950 11:47:05 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:05:06.950 11:47:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:06.950
00:05:06.950 Type BDF Vendor Device NUMA Driver Device Block devices
00:05:06.950 11:47:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:05:06.950 11:47:05 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:05:06.950 11:47:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:06.950 11:47:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]]
00:05:06.950 11:47:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]]
00:05:06.950 11:47:05 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:05:06.950 11:47:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:07.209 11:47:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]]
00:05:07.209 11:47:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:05:07.209 11:47:05 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]]
00:05:07.209 11:47:05 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:05:07.209 11:47:05 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:05:07.209 11:47:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:07.209 11:47:05 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]]
00:05:07.209 11:47:05 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:05:07.209 11:47:05 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]]
00:05:07.209 11:47:05 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:05:07.209 11:47:05 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
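Before touching any namespace, the acl test's get_zoned_devs pass (traced at the top of this section) filters out zoned block devices, since the tests assume randomly writable media. A sketch of that probe, with the helper reconstructed from the xtrace rather than copied from autotest_common.sh:

```bash
# Zoned-device probe as suggested by the trace: "none" in the sysfs zoned
# attribute means an ordinary, randomly writable device.
is_block_zoned() {
    local device=$1
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(</sys/block/$device/queue/zoned) != none ]]
}

declare -A zoned_devs
for nvme in /sys/block/nvme*; do
    if is_block_zoned "${nvme##*/}"; then
        zoned_devs["${nvme##*/}"]=1   # would be excluded from the tests below
    fi
done
```

On this runner every namespace reports "none", which is why the associative array stays empty and all four controllers proceed into the PCI walk that follows.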
00:05:07.209 11:47:05 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:07.467 11:47:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:12.0 == *:*:*.* ]]
00:05:07.467 11:47:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:05:07.467 11:47:06 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]]
00:05:07.468 11:47:06 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:05:07.468 11:47:06 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:05:07.468 11:47:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:07.468 11:47:06 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:13.0 == *:*:*.* ]]
00:05:07.468 11:47:06 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:05:07.468 11:47:06 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]]
00:05:07.468 11:47:06 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:05:07.468 11:47:06 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:05:07.468 11:47:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:05:07.468 11:47:06 setup.sh.acl -- setup/acl.sh@24 -- # (( 4 > 0 ))
00:05:07.468 11:47:06 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:05:07.468 11:47:06 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:07.468 11:47:06 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:07.468 11:47:06 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:05:07.468 ************************************
00:05:07.468 START TEST denied
00:05:07.468 ************************************
00:05:07.468 11:47:06 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied
00:05:07.468 11:47:06 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0'
00:05:07.468 11:47:06 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:05:07.468 11:47:06 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0'
00:05:07.468 11:47:06 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:05:07.468 11:47:06 setup.sh.acl.denied -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:05:08.839 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0
00:05:08.839 11:47:07 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:00:10.0
00:05:08.839 11:47:07 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:05:08.839 11:47:07 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:05:08.839 11:47:07 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]]
00:05:08.839 11:47:07 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver
00:05:08.839 11:47:07 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:05:08.839 11:47:07 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:05:08.839 11:47:07 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:05:08.839 11:47:07 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:08.839 11:47:07 setup.sh.acl.denied -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:15.411
00:05:15.411 real 0m7.493s
00:05:15.411 user 0m0.908s
00:05:15.411 sys 0m1.663s
00:05:15.411 11:47:13 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:15.411 11:47:13 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:05:15.411 ************************************
00:05:15.411 END TEST denied
00:05:15.411 ************************************
00:05:15.411 11:47:13 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:05:15.411 11:47:13 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:15.411 11:47:13 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:15.411 11:47:13 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:05:15.411 ************************************
00:05:15.411 START TEST allowed
00:05:15.411 ************************************
00:05:15.411 11:47:13 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed
00:05:15.411 11:47:13 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0
00:05:15.411 11:47:13 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*'
00:05:15.411 11:47:13 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:05:15.411 11:47:13 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:05:15.411 11:47:13 setup.sh.acl.allowed -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config
00:05:16.345 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:05:16.345 11:47:15 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:05:16.345 11:47:15 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:05:16.345 11:47:15 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@"
00:05:16.345 11:47:15 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]]
00:05:16.345 11:47:15 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver
00:05:16.345 11:47:15 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:05:16.345 11:47:15 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:05:16.345 11:47:15 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@"
00:05:16.345 11:47:15 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:12.0 ]]
00:05:16.345 11:47:15 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:12.0/driver
00:05:16.345 11:47:15 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:05:16.345 11:47:15 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:05:16.345 11:47:15 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@"
00:05:16.345 11:47:15 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:13.0 ]]
00:05:16.345 11:47:15 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:13.0/driver
00:05:16.345 11:47:15 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:05:16.345 11:47:15 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:05:16.345 11:47:15 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:05:16.345 11:47:15 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:05:16.345 11:47:15 setup.sh.acl.allowed -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:05:17.719
00:05:17.719 real 0m2.645s
00:05:17.719 user 0m1.076s
00:05:17.719 sys 0m1.581s
00:05:17.719 11:47:16 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:17.719 11:47:16 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
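The two acl sub-tests exercise scripts/setup.sh's environment-variable filters: denied blocks a controller and expects setup.sh to leave it on its kernel driver, allowed whitelists the same controller and expects it to be rebound to userspace. A reconstruction of the two invocations under those assumptions (the grep patterns are taken verbatim from the trace; uio_pci_generic is what this IOMMU-less VM gets instead of vfio-pci):

```bash
cd /home/vagrant/spdk_repo/spdk

# 'denied': a BDF listed in PCI_BLOCKED must be skipped by setup.sh.
PCI_BLOCKED="0000:00:10.0" ./scripts/setup.sh config \
    | grep 'Skipping denied controller at 0000:00:10.0'

# 'allowed': with PCI_ALLOWED set, only that controller is rebound.
PCI_ALLOWED="0000:00:10.0" ./scripts/setup.sh config \
    | grep -E '0000:00:10.0 .*: nvme -> .*'

./scripts/setup.sh reset   # hand every device back to its kernel driver
```

In both cases the verify step in the trace then reads readlink -f /sys/bus/pci/devices/<bdf>/driver to confirm which driver actually holds each controller.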
00:05:17.719 ************************************
00:05:17.719 END TEST allowed
00:05:17.719 ************************************
00:05:17.719 ************************************
00:05:17.719 END TEST acl
00:05:17.719 ************************************
00:05:17.719
00:05:17.719 real 0m13.295s
00:05:17.719 user 0m3.318s
00:05:17.719 sys 0m5.078s
00:05:17.719 11:47:16 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:17.719 11:47:16 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:05:17.719 11:47:16 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:05:17.719 11:47:16 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:17.719 11:47:16 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:17.719 11:47:16 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:05:17.719 ************************************
00:05:17.719 START TEST hugepages
00:05:17.719 ************************************
00:05:17.719 11:47:16 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:05:17.979 * Looking for test storage...
00:05:17.979 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:05:17.979 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:05:17.979 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:05:17.979 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:05:17.979 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:05:17.979 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:05:17.979 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:05:17.979 11:47:16 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:05:17.979 11:47:16 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:05:17.979 11:47:16 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:05:17.979 11:47:16 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:05:17.979 11:47:16 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:17.979 11:47:16 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:17.979 11:47:16 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:17.979 11:47:16 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:05:17.979 11:47:16 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:17.979 11:47:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:05:17.979 11:47:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:05:17.980 11:47:16 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 4408720 kB' 'MemAvailable: 7385884 kB' 'Buffers: 2436 kB' 'Cached: 3179916 kB' 'SwapCached: 0 kB' 'Active: 449236 kB' 'Inactive: 2839812 kB' 'Active(anon): 117212 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839812 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 248 kB' 'Writeback: 0 kB' 'AnonPages: 108012 kB' 'Mapped: 48768 kB' 'Shmem: 10516 kB' 'KReclaimable: 84532 kB' 'Slab: 166676 kB' 'SReclaimable: 84532 kB' 'SUnreclaim: 82144 kB' 'KernelStack: 6460 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 331816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55064 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: setup/common.sh@31-32 then walks the snapshot key by key, from MemTotal through HugePages_Surp, hitting 'continue' on every key that is not Hugepagesize]
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
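The get_meminfo helper traced above is just a key lookup over /proc/meminfo, which is why the log shows one full key walk per query. A condensed equivalent (global file only; the real helper can also read a per-node meminfo file and first strips its "Node N" prefix, which this sketch omits):

```bash
# Minimal stand-in for the traced helper: scan /proc/meminfo line by line
# and print the value of the first matching key.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # A line looks like "Hugepagesize:       2048 kB"; IFS=': ' leaves
        # the bare key in $var and the number in $val.
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done </proc/meminfo
    return 1
}

get_meminfo Hugepagesize   # prints 2048 on this runner, per the snapshot above
```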
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:17.981 11:47:16 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:05:17.981 11:47:16 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:17.981 11:47:16 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:17.981 11:47:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:17.981 ************************************
00:05:17.981 START TEST default_setup
00:05:17.981 ************************************
00:05:17.981 11:47:16 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup
00:05:17.981 11:47:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:05:17.981 11:47:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:05:17.981 11:47:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:17.981 11:47:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:05:17.981 11:47:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:17.981 11:47:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:05:17.981 11:47:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:17.981 11:47:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:17.981 11:47:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:17.981 11:47:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:17.981 11:47:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:05:17.981 11:47:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:17.981 11:47:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:17.981 11:47:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:17.981 11:47:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:17.981 11:47:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:17.981 11:47:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:17.981 11:47:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:17.981 11:47:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:05:17.982 11:47:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:05:17.982 11:47:16 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:05:17.982 11:47:16 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:18.549 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:19.487 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:05:19.487 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:05:19.487 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:05:19.487 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
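The jump from get_test_nr_hugepages 2097152 0 to nr_hugepages=1024 in the trace is plain division, assuming both figures are in kB, which is how get_meminfo reports Hugepagesize:

```bash
# Unit check behind the traced values: 2 GiB of hugepage memory at the
# default 2 MiB page size.
size=2097152              # argument to get_test_nr_hugepages, in kB (= 2 GiB)
default_hugepages=2048    # kB, from get_meminfo Hugepagesize
echo $(( size / default_hugepages ))   # -> 1024, the nr_hugepages in the trace
```

The trailing 0 in the call is the NUMA node list; with one node requested, all 1024 pages land in nodes_test[0].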
setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6517524 kB' 'MemAvailable: 9494432 kB' 'Buffers: 2436 kB' 'Cached: 3179908 kB' 'SwapCached: 0 kB' 'Active: 461756 kB' 'Inactive: 2839836 kB' 'Active(anon): 129732 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 121124 kB' 'Mapped: 48840 kB' 'Shmem: 10476 kB' 'KReclaimable: 83972 kB' 'Slab: 166000 kB' 'SReclaimable: 83972 kB' 'SUnreclaim: 82028 kB' 'KernelStack: 6496 kB' 'PageTables: 4184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55112 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# continue 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.487 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.488 11:47:18 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue
00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[log condensed: the same common.sh@31/@32 IFS/read/compare/continue cycle repeats for PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted, none matching AnonHugePages]
00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
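The block above is one complete get_meminfo() call: setup/common.sh slurps the whole meminfo file with mapfile, then walks it key by key with IFS=': ' read until the requested key matches, echoing the bare value (the 'kB' unit lands in the discarded _ field). A minimal sketch of that reader, reconstructed purely from the @17-@33 trace lines here rather than from the actual SPDK source:

  get_meminfo() {                          # hypothetical reconstruction from the trace
      local get=$1                         # key to look up, e.g. AnonHugePages
      local node=$2                        # optional NUMA node; empty in this run
      local var val
      local mem_f mem
      shopt -s extglob                     # required by the +([0-9]) pattern below
      mem_f=/proc/meminfo
      if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")     # per-node files prefix each line with "Node <n> "
      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue # the @32 compare seen throughout the trace
          echo "$val"                      # @33: numeric value only, unit already split off
          return 0
      done
      return 1                             # assumed behaviour when the key is absent
  }

Called as get_meminfo AnonHugePages it prints 0 against the snapshot below; a node argument would switch it to the node<N>/meminfo sysfs file instead of /proc/meminfo.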
00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:19.488 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:19.489 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6517272 kB' 'MemAvailable: 9494180 kB' 'Buffers: 2436 kB' 'Cached: 3179908 kB' 'SwapCached: 0 kB' 'Active: 461768 kB' 'Inactive: 2839836 kB' 'Active(anon): 129744 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 120868 kB' 'Mapped: 48712 kB' 'Shmem: 10476 kB' 'KReclaimable: 83972 kB' 'Slab: 166000 kB' 'SReclaimable: 83972 kB' 'SUnreclaim: 82028 kB' 'KernelStack: 6496 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
00:05:19.489 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:19.489 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[log condensed: the common.sh@31/@32 IFS/read/compare/continue cycle repeats for every key from MemFree through HugePages_Rsvd, none matching HugePages_Surp]
00:05:19.490 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:19.490 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:19.490 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:19.490 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
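One sanity check worth making on the @16 snapshot above: the hugepage counters are internally consistent, since HugePages_Total is 1024 at a Hugepagesize of 2048 kB, and 1024 * 2048 kB = 2097152 kB, exactly the reported Hugetlb figure; with HugePages_Free also at 1024, the pool is fully allocated but untouched. The same cross-check as a stand-alone snippet (illustrative only, not part of the test scripts):

  awk '/^HugePages_Total/ {n = $2}
       /^Hugepagesize/    {sz = $2}
       /^Hugetlb/         {tot = $2}
       END {s = (n * sz == tot) ? "hugetlb consistent:" : "hugetlb mismatch:"
            print s, n * sz, "kB vs", tot, "kB"}' /proc/meminfo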
00:05:19.490 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:19.490 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:19.490 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:19.490 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:19.490 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:19.490 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:19.490 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:19.490 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:19.490 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:19.490 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:19.490 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:19.490 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:19.490 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6517512 kB' 'MemAvailable: 9494420 kB' 'Buffers: 2436 kB' 'Cached: 3179908 kB' 'SwapCached: 0 kB' 'Active: 461876 kB' 'Inactive: 2839836 kB' 'Active(anon): 129852 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 120980 kB' 'Mapped: 48712 kB' 'Shmem: 10476 kB' 'KReclaimable: 83972 kB' 'Slab: 165988 kB' 'SReclaimable: 83972 kB' 'SUnreclaim: 82016 kB' 'KernelStack: 6496 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
00:05:19.490 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:19.490 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[log condensed: the common.sh@31/@32 IFS/read/compare/continue cycle repeats for every key from MemFree through HugePages_Free, none matching HugePages_Rsvd]
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:05:19.754 nr_hugepages=1024
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:19.754 resv_hugepages=0
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:19.754 surplus_hugepages=0
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:19.754 anon_hugepages=0
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6518444 kB' 'MemAvailable: 9495352 kB' 'Buffers: 2436 kB' 'Cached: 3179908 kB' 'SwapCached: 0 kB' 'Active: 461760 kB' 'Inactive: 2839836 kB' 'Active(anon): 129736 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'AnonPages: 120864 kB' 'Mapped: 48712 kB' 'Shmem: 10476 kB' 'KReclaimable: 83972 kB' 'Slab: 165984 kB' 'SReclaimable: 83972 kB' 'SUnreclaim: 82012 kB' 'KernelStack: 6480 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 348992 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55096 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
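The @107/@109 arithmetic evaluations just above are the pass/fail core of default_setup: the page count (already expanded to 1024 in the trace) must equal the requested pool once surplus and reserved pages are folded in, after which @110 re-reads HugePages_Total for the follow-up comparison. One plausible shape of that flow, pieced together from the hugepages.sh@97-@110 trace lines (a reconstruction under those assumptions, not the real script):

  anon=$(get_meminfo AnonHugePages)    # @97  -> 0
  surp=$(get_meminfo HugePages_Surp)   # @99  -> 0
  resv=$(get_meminfo HugePages_Rsvd)   # @100 -> 0
  nr_hugepages=1024                    # pool size this test configured
  total=1024                           # kernel-reported count, expanded before @107

  echo "nr_hugepages=$nr_hugepages"    # @102
  echo "resv_hugepages=$resv"          # @103
  echo "surplus_hugepages=$surp"       # @104
  echo "anon_hugepages=$anon"          # @105

  (( total == nr_hugepages + surp + resv ))  # @107: holds, since surp and resv are 0
  (( total == nr_hugepages ))                # @109: 1024 == 1024
  total=$(get_meminfo HugePages_Total)       # @110: re-read for the next comparison

With surplus, reserved and anonymous hugepages all zero, both tests reduce to 1024 == 1024, so the default 1024-page setup verifies cleanly here.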
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:19.754 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[log condensed: the common.sh@31/@32 IFS/read/compare/continue cycle repeats for each key from MemFree through WritebackTmp, none matching HugePages_Total]
00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.755 11:47:18 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:05:19.755 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # 
no_nodes=1 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6518444 kB' 'MemUsed: 5723532 kB' 'SwapCached: 0 kB' 'Active: 461764 kB' 'Inactive: 2839836 kB' 'Active(anon): 129740 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839836 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 268 kB' 'Writeback: 0 kB' 'FilePages: 3182344 kB' 'Mapped: 48712 kB' 'AnonPages: 120864 kB' 'Shmem: 10476 kB' 'KernelStack: 6480 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83972 kB' 'Slab: 165984 kB' 'SReclaimable: 83972 kB' 'SUnreclaim: 82012 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.756 11:47:18 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.756 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
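The records above are bash xtrace output from the get_meminfo helper in setup/common.sh: it reads one meminfo file (system-wide /proc/meminfo, or a node's /sys/devices/system/node/nodeN/meminfo when a node id is supplied) and prints the value of the single requested field. A minimal sketch of that loop, reconstructed from the @17-@33 records traced above; the actual script may differ in detail:

    #!/usr/bin/env bash
    # Sketch of setup/common.sh:get_meminfo as inferred from the xtrace above
    # (not the verbatim SPDK source). Prints the value of one meminfo field.
    shopt -s extglob  # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2    # field name (@17) and optional NUMA node (@18)
        local var val           # @19
        local mem_f mem         # @20

        mem_f=/proc/meminfo                                           # @22
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then  # @23
            mem_f=/sys/devices/system/node/node$node/meminfo          # @24
        fi

        mapfile -t mem <"$mem_f"          # @28: one array entry per meminfo line
        mem=("${mem[@]#Node +([0-9]) }")  # @29: strip the "Node N " prefix used by per-node files

        # @31-@33: scan "Key: value ..." entries until the requested key matches.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Against the values printed in the trace, get_meminfo HugePages_Total yields 1024 from /proc/meminfo and get_meminfo HugePages_Surp 0 (which reads /sys/devices/system/node/node0/meminfo) yields 0, matching the echo 1024 and echo 0 records above.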
00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:19.757 node0=1024 expecting 1024
00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:19.757
00:05:19.757 real 0m1.690s
00:05:19.757 user 0m0.682s
00:05:19.757 sys 0m0.995s
00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:19.757 11:47:18 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:05:19.757 ************************************
00:05:19.757 END TEST default_setup
00:05:19.757 ************************************
00:05:19.757 11:47:18 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:19.757 11:47:18 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:19.757 11:47:18 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:19.757 11:47:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:19.757 ************************************
00:05:19.757 START TEST per_node_1G_alloc
00:05:19.757 ************************************
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:19.757 11:47:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:20.326 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:20.588 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:20.588 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:20.588 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:20.588 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
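The get_test_nr_hugepages call traced at hugepages.sh@49-@73 converts the 1G (1048576 kB) request into a per-node page count using the default hugepage size, which is 2048 kB per the Hugepagesize field in the meminfo dumps below: 1048576 / 2048 = 512 pages, all assigned to node 0. That is exactly the NRHUGE=512 HUGENODE=0 environment handed to scripts/setup.sh above. A hedged sketch of the arithmetic; the default_hugepages value and the control flow are inferred from the trace, not copied from the script:

    #!/usr/bin/env bash
    # Sketch of the size -> page-count computation traced at hugepages.sh@49-@73.
    declare -a nodes_test=()
    default_hugepages=2048   # kB, from "Hugepagesize: 2048 kB" in /proc/meminfo

    get_test_nr_hugepages() {
        local size=$1                     # requested size in kB (@49)
        (( $# > 1 )) && shift             # remaining args are node ids (@50-@51)
        local node_ids=("$@")             # e.g. ('0') (@52)

        (( size >= default_hugepages )) || return 1  # must cover at least one page (@55)
        nr_hugepages=$((size / default_hugepages))   # 1048576 / 2048 = 512 (@57)

        local node
        for node in "${node_ids[@]}"; do  # @70-@71: each requested node gets the count
            nodes_test[node]=$nr_hugepages
        done
    }

    get_test_nr_hugepages 1048576 0
    echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"  # nr_hugepages=512 node0=512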
-- # IFS=': ' 00:05:20.588 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.588 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7573512 kB' 'MemAvailable: 10550428 kB' 'Buffers: 2436 kB' 'Cached: 3179908 kB' 'SwapCached: 0 kB' 'Active: 461888 kB' 'Inactive: 2839844 kB' 'Active(anon): 129864 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 121000 kB' 'Mapped: 48712 kB' 'Shmem: 10476 kB' 'KReclaimable: 83972 kB' 'Slab: 166036 kB' 'SReclaimable: 83972 kB' 'SUnreclaim: 82064 kB' 'KernelStack: 6528 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:20.588 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.588 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.588 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.588 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.588 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.588 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.588 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.588 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.588 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.588 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.588 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.588 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.589 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@31-32 repeats "IFS=': '; read -r var val _; continue" over the remaining snapshot fields (NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted) until the requested field comes up]
00:05:20.590 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:20.590 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:20.590 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:20.590 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:20.590 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:20.590 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:20.590 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:05:20.590 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:05:20.590 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:20.590 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:20.590 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:20.590 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:20.590 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:20.590 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:20.590 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:20.590 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:20.590 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7573736 kB' 'MemAvailable: 10550652 kB' 'Buffers: 2436 kB' 'Cached: 3179908 kB' 'SwapCached: 0 kB' 'Active: 461580 kB' 'Inactive: 2839844 kB' 'Active(anon): 129556 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 120956 kB' 'Mapped: 48712 kB' 'Shmem: 10476 kB' 'KReclaimable: 83972 kB' 'Slab: 166032 kB' 'SReclaimable: 83972 kB' 'SUnreclaim: 82060 kB' 'KernelStack: 6512 kB' 'PageTables: 4252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
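
The records above are setup/common.sh's get_meminfo helper at work: it snapshots the meminfo file with mapfile, replays the snapshot through printf, and walks the "field: value" pairs with an IFS=': ' read loop until the requested field matches. (The backslash-escaped right-hand side in the @32 records is just xtrace printing a quoted pattern, one escape per character, to show it matches literally.) A minimal sketch of that pattern, reconstructed from the trace rather than copied from the SPDK source:

#!/usr/bin/env bash
# Minimal sketch of the get_meminfo pattern visible in the common.sh@17-33
# records above -- reconstructed for illustration, not the SPDK source.
get_meminfo() {
    local get=$1                      # field to look up      (common.sh@17)
    local var val _                   # "field: value [unit]" (common.sh@19)
    local mem_f=/proc/meminfo mem
    mapfile -t mem < "$mem_f"         # snapshot the file     (common.sh@28)
    # Replay the snapshot one line at a time; IFS=': ' splits
    # "HugePages_Surp: 0" into var=HugePages_Surp, val=0.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue    # the repeated @32 records
        echo "$val"                         # @33 -- # echo 0
        return 0
    done < <(printf '%s\n' "${mem[@]}")     # @16 -- # printf '%s\n' ...
    return 1
}

get_meminfo HugePages_Surp    # prints 0 on the machine traced above

Scanning the replayed snapshot rather than re-reading the file keeps one consistent view of /proc/meminfo per query.
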
[xtrace condensed: setup/common.sh@31-32 compares each snapshot field against HugePages_Surp, continuing past MemTotal through HugePages_Free, until the match below]
00:05:20.592 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:20.592 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:20.592 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:20.592 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:20.592 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[xtrace condensed: same get_meminfo preamble as above (common.sh@17-31) with get=HugePages_Rsvd; node is again unset, so /proc/meminfo is re-read]
00:05:20.592 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7573944 kB' 'MemAvailable: 10550860 kB' 'Buffers: 2436 kB' 'Cached: 3179908 kB' 'SwapCached: 0 kB' 'Active: 461876 kB' 'Inactive: 2839844 kB' 'Active(anon): 129852 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 120988 kB' 'Mapped: 48712 kB' 'Shmem: 10476 kB' 'KReclaimable: 83972 kB' 'Slab: 166016 kB' 'SReclaimable: 83972 kB' 'SUnreclaim: 82044 kB' 'KernelStack: 6496 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
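
The mem=("${mem[@]#Node +([0-9]) }") step in the preamble is easy to skim past: per-node meminfo files under /sys prefix every line with "Node <n> ", and this extglob expansion strips that prefix so the same parser handles both the per-node and system-wide formats. A standalone demo (the sample lines are illustrative, not from this run):

#!/usr/bin/env bash
# Demo of the Node-prefix strip from the common.sh@29 record.
# The +([0-9]) pattern inside ${...#...} needs extglob.
shopt -s extglob

# Lines as a per-node file presents them (sample values, not from this run):
mem=('Node 0 MemTotal: 6061960 kB' 'Node 0 HugePages_Total: 256')

mem=("${mem[@]#Node +([0-9]) }")    # drop the leading "Node 0 "

printf '%s\n' "${mem[@]}"
# MemTotal: 6061960 kB
# HugePages_Total: 256

On plain /proc/meminfo lines the pattern simply fails to match and the expansion leaves them untouched, which is why the strip can run unconditionally.
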
[xtrace condensed: setup/common.sh@31-32 compares each field of the second snapshot against HugePages_Rsvd, continuing past MemTotal through HugePages_Free, until the match below]
00:05:20.594 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:20.594 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:20.594 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:20.594 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:20.594 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:20.594 nr_hugepages=512
00:05:20.594 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:20.594 resv_hugepages=0
00:05:20.594 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:20.594 surplus_hugepages=0
00:05:20.594 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:20.594 anon_hugepages=0
00:05:20.594 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:20.594 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:20.594 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[xtrace condensed: same get_meminfo preamble as above (common.sh@17-31) with get=HugePages_Total]
00:05:20.594 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7573944 kB' 'MemAvailable: 10550860 kB' 'Buffers: 2436 kB' 'Cached: 3179908 kB' 'SwapCached: 0 kB' 'Active: 461824 kB' 'Inactive: 2839844 kB' 'Active(anon): 129800 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'AnonPages: 120892 kB' 'Mapped: 48712 kB' 'Shmem: 10476 kB' 'KReclaimable: 83972 kB' 'Slab: 166016 kB' 'SReclaimable: 83972 kB' 'SUnreclaim: 82044 kB' 'KernelStack: 6480 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 349120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
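
The hugepages.sh records between the snapshots are the actual point of this test step: having asked for 512 pages, it re-reads the counters, reports them, and asserts that the 512 pages are all real, non-reserved, non-surplus allocations. A self-contained sketch of that bookkeeping (order compressed; names follow the trace records, and 512 is the figure from this run; get_meminfo is a stand-in for the helper sketched earlier):

#!/usr/bin/env bash
# Sketch of the verification at setup/hugepages.sh@97-109, reconstructed
# from the trace -- not the SPDK source.
set -e

get_meminfo() {   # minimal stand-in for the helper sketched earlier
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$1" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

expected=512                                  # pages this test requested
anon=$(get_meminfo AnonHugePages)             # @97:  anon=0 in the trace
surp=$(get_meminfo HugePages_Surp)            # @99:  surp=0
resv=$(get_meminfo HugePages_Rsvd)            # @100: resv=0
nr_hugepages=$(get_meminfo HugePages_Total)   # 512 in this run

echo "nr_hugepages=$nr_hugepages"             # @102-105: the echoed summary
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# @107/@109: every requested page must be a real, non-surplus,
# non-reserved hugepage; with set -e a false test aborts the step.
(( expected == nr_hugepages + surp + resv ))
(( expected == nr_hugepages ))

In the trace both arithmetic checks pass: HugePages_Total and HugePages_Free both read 512, with Rsvd and Surp at 0.
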
[xtrace condensed: setup/common.sh@31-32 compares each field of the third snapshot against HugePages_Total -- MemTotal through Percpu at this point -- and the trace continues]
setup/common.sh@31 -- # read -r var val _ 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.595 11:47:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 512 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:05:20.595 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:20.596 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.596 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:20.596 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:20.596 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.596 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.596 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:20.596 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:20.596 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7573944 kB' 'MemUsed: 4668032 kB' 'SwapCached: 0 kB' 'Active: 461828 kB' 'Inactive: 2839844 kB' 'Active(anon): 129804 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839844 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 188 kB' 'Writeback: 0 kB' 'FilePages: 3182344 kB' 'Mapped: 48712 kB' 
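The get_meminfo call being traced here reduces to a small, reusable pattern: pick /proc/meminfo or the per-node sysfs file, strip the "Node <N> " prefix that only the per-node file carries, then split each "Key: value" line on ': ' and echo the requested field. A minimal standalone sketch of that pattern, assuming bash 4+ with extglob; it paraphrases rather than reproduces setup/common.sh:

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern traced above: choose the system-wide
# or per-node meminfo file, strip the "Node <N> " prefix the per-node
# file carries, split "Key: value" on ': ', and echo the requested field.
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo_sketch() {
    local get=$1 node=$2
    local var val _ line
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
        && mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start "Node 0 ..."
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo_sketch HugePages_Total 0   # would print the 512 seen in this trace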
00:05:20.596 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [trace condensed: each node0 meminfo key from MemTotal through HugePages_Free tested against HugePages_Surp and skipped via continue]
00:05:20.597 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:20.597 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:05:20.597 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:05:20.597 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:20.597 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:20.597 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:20.597 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:20.597 node0=512 expecting 512
00:05:20.597 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:20.597 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:20.597
00:05:20.597 real 0m0.971s
00:05:20.597 user 0m0.431s
00:05:20.597 sys 0m0.601s
00:05:20.597 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:20.597 11:47:19 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:20.597 ************************************
00:05:20.597 END TEST per_node_1G_alloc
00:05:20.597 ************************************
00:05:20.858 11:47:19 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:20.858 11:47:19 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:20.858 11:47:19 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:20.858 11:47:19
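The "node0=512 expecting 512" check above compares the per-node hugepage counts gathered by get_nodes against the requested allocation. A rough equivalent that reads the stock kernel sysfs counters directly; the paths are the standard layout, and the variable names are illustrative rather than the exact ones in setup/hugepages.sh:

#!/usr/bin/env bash
# Rough equivalent of the per-node check that printed
# "node0=512 expecting 512": enumerate NUMA nodes and compare each
# node's allocated 2048 kB hugepages against the requested count.
shopt -s extglob nullglob

expected=512
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    n=${node##*node}   # node path -> numeric node id
    nodes_sys[$n]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done

for n in "${!nodes_sys[@]}"; do
    echo "node$n=${nodes_sys[$n]} expecting $expected"
    [[ ${nodes_sys[$n]} == "$expected" ]] || exit 1
done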
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:20.858 ************************************
00:05:20.858 START TEST even_2G_alloc
00:05:20.858 ************************************
00:05:20.858 11:47:19 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc
00:05:20.858 11:47:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:20.858 11:47:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:05:20.858 11:47:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:20.858 11:47:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:20.858 11:47:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:20.858 11:47:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:20.858 11:47:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:20.858 11:47:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:20.858 11:47:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:20.858 11:47:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:20.858 11:47:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:20.858 11:47:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:20.858 11:47:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:20.858 11:47:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:20.858 11:47:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:20.858 11:47:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:05:20.858 11:47:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:20.858 11:47:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:20.858 11:47:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:20.858 11:47:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:20.858 11:47:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:20.858 11:47:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:05:20.858 11:47:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:20.858 11:47:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:21.428 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:21.428 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:21.428 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:21.428 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:21.428 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:21.428 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:21.428 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:05:21.428 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
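The get_test_nr_hugepages 2097152 step above turns a size into a page count: with the default 2048 kB hugepage size, 2097152 kB (2 GiB, matching the test name) yields the nr_hugepages=1024 and NRHUGE=1024 seen in the trace, and also matches the 'Hugetlb: 2097152 kB' total reported in the meminfo dump below. A sketch of that arithmetic, assuming the size argument is in kB (the trace does not show the division itself):

#!/usr/bin/env bash
# Sketch of the size -> page-count arithmetic behind
# "get_test_nr_hugepages 2097152 ... nr_hugepages=1024", assuming the
# size argument is in kB (2097152 kB = 2 GiB).
size_kb=2097152
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)  # 2048 on this box
nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 2097152 / 2048 = 1024
echo "NRHUGE=$nr_hugepages"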
00:05:21.428 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:21.428 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:21.428 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:21.428 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:21.428 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:21.428 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:21.428 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:21.428 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:21.428 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:21.428 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:21.428 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:21.428 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:21.428 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:21.428 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:21.428 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:21.428 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:21.428 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:21.428 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6525524 kB' 'MemAvailable: 9502444 kB' 'Buffers: 2436 kB' 'Cached: 3179912 kB' 'SwapCached: 0 kB' 'Active: 461816 kB' 'Inactive: 2839848 kB' 'Active(anon): 129792 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 120888 kB' 'Mapped: 48836 kB' 'Shmem: 10476 kB' 'KReclaimable: 83972 kB' 'Slab: 166000 kB' 'SReclaimable: 83972 kB' 'SUnreclaim: 82028 kB' 'KernelStack: 6504 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
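The @96 test above guards the anonymous-hugepage accounting: AnonHugePages is only counted when transparent hugepages are not globally disabled, i.e. when the sysfs knob does not read "[never]". Here it reads "always [madvise] never" (madvise selected), so the guard passes and get_meminfo AnonHugePages runs, returning 0 below. A hedged sketch of that guard; the file path is the standard kernel location, and the surrounding logic is paraphrased rather than copied from setup/hugepages.sh:

#!/usr/bin/env bash
# Sketch of the anon-hugepage guard at setup/hugepages.sh@96: only count
# AnonHugePages when transparent hugepages are not globally disabled.
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" here
if [[ $thp != *"[never]"* ]]; then
    anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # 0 kB in this run
else
    anon=0
fi
echo "anon=$anon"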
00:05:21.428 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [trace condensed: every meminfo key from MemTotal through HardwareCorrupted tested against AnonHugePages and skipped via continue]
00:05:21.694 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:21.694 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:21.694 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:21.694 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:21.694 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:21.694 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:21.694 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:21.694 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:21.694 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:21.694 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:21.694 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:21.694 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:21.694 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:21.694 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:21.694 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:21.694 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:21.694 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6526824 kB' 'MemAvailable: 9503744 kB' 'Buffers: 2436 kB' 'Cached: 3179912 kB' 'SwapCached: 0 kB' 'Active: 461620 kB' 'Inactive: 2839848 kB' 'Active(anon): 129596 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 120988 kB' 'Mapped: 48712 kB' 'Shmem: 10476 kB' 'KReclaimable: 83972 kB' 'Slab: 166016 kB' 'SReclaimable: 83972 kB' 'SUnreclaim: 82044 kB' 'KernelStack: 6496 kB' 'PageTables: 4176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
00:05:21.694 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-@32 -- # [scan condensed: IFS=': '; read -r var val _; continue -- repeated for every field from MemTotal through HugePages_Rsvd, none matching HugePages_Surp]
00:05:21.696 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:21.696 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:21.696 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:21.696 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:21.696 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:21.696 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:21.696 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:21.696 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:21.696 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:21.696 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:21.696 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:21.696 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:21.696 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:21.696 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:21.696 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6526616 kB' 'MemAvailable: 9503536 kB' 'Buffers: 2436 kB' 'Cached: 3179912 kB' 'SwapCached: 0 kB' 'Active: 461856 kB' 'Inactive: 2839848 kB' 'Active(anon): 129832 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 120992 kB' 'Mapped: 48712 kB' 'Shmem: 10476 kB' 'KReclaimable: 83972 kB' 'Slab: 166016 kB' 'SReclaimable: 83972 kB' 'SUnreclaim: 82044 kB' 'KernelStack: 6496 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
00:05:21.696 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-@32 -- # [scan condensed: IFS=': '; read -r var val _; continue -- repeated for every field from MemTotal through HugePages_Free, none matching HugePages_Rsvd]
00:05:21.698 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
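Each of the scans condensed above is the same loop: split a "Key: value" line on IFS=': ' and stop at the requested key (the backslash-escaped \H\u\g\e... strings are just how bash xtrace prints the quoted pattern on the right-hand side of ==). A standalone sketch of that loop; get_field is an illustrative name and the snapshot is assumed to arrive on stdin:

    #!/usr/bin/env bash
    # Sketch of the field scan traced at common.sh@31-@33: skip every key
    # with continue until $get matches, then print its value and stop.
    get_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # e.g. MemTotal, MemFree, ...
            echo "${val:-0}"                   # "HugePages_Surp: 0" -> 0
            return 0
        done
        echo 0                                 # key not present at all
    }

    # Usage mirroring this log: surp=$(get_field HugePages_Surp < /proc/meminfo)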
00:05:21.698 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:21.698 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:21.698 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:21.698 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:21.698 nr_hugepages=1024
00:05:21.698 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:21.698 resv_hugepages=0
00:05:21.698 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:21.698 surplus_hugepages=0
00:05:21.698 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:21.698 anon_hugepages=0
00:05:21.698 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:21.698 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:21.698 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:21.698 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:21.698 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:05:21.698 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:21.698 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:21.698 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:21.698 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:21.698 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:21.698 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:21.698 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:21.698 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6526616 kB' 'MemAvailable: 9503536 kB' 'Buffers: 2436 kB' 'Cached: 3179912 kB' 'SwapCached: 0 kB' 'Active: 461576 kB' 'Inactive: 2839848 kB' 'Active(anon): 129552 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'AnonPages: 120932 kB' 'Mapped: 48712 kB' 'Shmem: 10476 kB' 'KReclaimable: 83972 kB' 'Slab: 166016 kB' 'SReclaimable: 83972 kB' 'SUnreclaim: 82044 kB' 'KernelStack: 6464 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 349120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55128 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
00:05:21.698 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-@32 -- # [scan condensed: IFS=': '; read -r var val _; continue -- repeated for every field from MemTotal through Unaccepted, none matching HugePages_Total]
00:05:21.699 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.699 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.699 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.699 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.699 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.699 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.699 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.699 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.699 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.699 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.699 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.699 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.699 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.699 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.699 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.699 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.699 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.699 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.699 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.699 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:05:21.699 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:21.699 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:21.699 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.700 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:05:21.700 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:05:21.700 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:21.700 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:21.700 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:05:21.700 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:21.700 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:21.700 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:21.700 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:21.700 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:21.700 11:47:20 setup.sh.hugepages.even_2G_alloc -- 
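The loop traced above is common.sh's get_meminfo: it prints a meminfo snapshot with printf, then reads it back one 'Key: value' pair at a time, rejecting every key with continue until the requested one (here HugePages_Total) matches and its value, 1024, is echoed back. A minimal sketch of the same scan, assuming a plain /proc/meminfo read; the helper name meminfo_value is illustrative and not part of the SPDK scripts:

    #!/usr/bin/env bash
    # meminfo_value KEY: print the value column of one /proc/meminfo key.
    meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # same reject-and-continue shape as the trace
            echo "$val"                        # the "kB" unit, when present, lands in $_
            return 0
        done < /proc/meminfo
        return 1                               # key not found
    }
    meminfo_value HugePages_Total              # would print 1024 on the runner traced above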
00:05:21.700 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:21.700 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:21.700 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:21.700 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:05:21.700 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:05:21.700 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:21.700 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:21.700 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:21.700 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:21.700 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:21.700 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:21.700 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:21.700 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:21.700 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6526616 kB' 'MemUsed: 5715360 kB' 'SwapCached: 0 kB' 'Active: 461828 kB' 'Inactive: 2839848 kB' 'Active(anon): 129804 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 200 kB' 'Writeback: 0 kB' 'FilePages: 3182348 kB' 'Mapped: 48712 kB' 'AnonPages: 120916 kB' 'Shmem: 10476 kB' 'KernelStack: 6480 kB' 'PageTables: 4132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83972 kB' 'Slab: 166008 kB' 'SReclaimable: 83972 kB' 'SUnreclaim: 82036 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:21.700 (scan: setup/common.sh@32 tests MemTotal through HugePages_Free of the node0 snapshot above against \H\u\g\e\P\a\g\e\s\_\S\u\r\p; each key fails the match and continues)
00:05:21.701 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:21.701 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:05:21.701 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:05:21.701 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
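For the per-node call get_meminfo HugePages_Surp 0 just traced, common.sh swaps mem_f to /sys/devices/system/node/node0/meminfo and strips the 'Node <n> ' prefix from each line (the mem=("${mem[@]#Node +([0-9]) }") step) before running the same key scan; the result feeds the per-node accounting behind the (( 1024 == nr_hugepages + surp + resv )) check. A sketch of that per-node lookup under the same assumptions, with sed standing in for the script's extglob expansion; node_meminfo_value is an illustrative name:

    # node_meminfo_value NODE KEY: per-NUMA-node variant of the scan above.
    node_meminfo_value() {
        local node=$1 get=$2 var val _
        local mem_f=/sys/devices/system/node/node$node/meminfo
        [[ -e $mem_f ]] || mem_f=/proc/meminfo   # fall back to the global snapshot
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(sed "s/^Node $node *//" "$mem_f")  # drop the "Node <n> " prefix
        return 1
    }
    node_meminfo_value 0 HugePages_Surp           # would print 0 on the runner traced above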
00:05:21.701 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:21.701 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:21.701 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:21.701 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:21.701 node0=1024 expecting 1024
00:05:21.701 11:47:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:21.701 
00:05:21.701 real 0m0.961s
00:05:21.701 user 0m0.421s
00:05:21.701 sys 0m0.589s
00:05:21.701 11:47:20 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:21.701 11:47:20 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:05:21.701 ************************************
00:05:21.701 END TEST even_2G_alloc
00:05:21.701 ************************************
00:05:21.701 11:47:20 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:21.701 11:47:20 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:21.701 11:47:20 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:21.701 11:47:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:05:21.701 ************************************
00:05:21.701 START TEST odd_alloc
00:05:21.701 ************************************
00:05:21.701 11:47:20 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1121 -- # odd_alloc
00:05:21.701 11:47:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:21.701 11:47:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:05:21.701 11:47:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:21.701 11:47:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:21.701 11:47:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:21.701 11:47:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:21.701 11:47:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:21.701 11:47:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:05:21.701 11:47:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:21.701 11:47:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:21.701 11:47:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:21.701 11:47:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:21.701 11:47:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:21.701 11:47:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:21.701 11:47:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:21.701 11:47:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:05:21.701 11:47:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:05:21.701 11:47:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:05:21.701 11:47:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:21.701 11:47:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:21.701 11:47:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
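get_test_nr_hugepages above turns the requested 2098176 kB (HUGEMEM=2049 MB) into nr_hugepages=1025, a deliberately odd count of default 2048 kB pages. The arithmetic, assuming round-up division (the rounding rule is inferred from the traced values, not quoted from hugepages.sh):

    hugemem_mb=2049
    size_kb=$(( hugemem_mb * 1024 ))       # 2098176 kB, as in the trace
    page_kb=2048                           # default x86_64 hugepage size
    nr_hugepages=$(( (size_kb + page_kb - 1) / page_kb ))
    echo "$nr_hugepages"                   # 1025, matching nr_hugepages in the trace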
00:05:21.701 11:47:20 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:05:21.701 11:47:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:21.701 11:47:20 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:22.269 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:22.529 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:22.530 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:22.530 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:22.530 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:22.530 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:22.530 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:05:22.530 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:05:22.530 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:05:22.530 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:05:22.530 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:05:22.530 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:05:22.530 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:22.530 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:22.530 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:22.530 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:22.530 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:22.530 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:22.530 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:22.530 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:22.530 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:22.530 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:22.530 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:22.530 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:22.530 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:22.530 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6524580 kB' 'MemAvailable: 9501500 kB' 'Buffers: 2436 kB' 'Cached: 3179912 kB' 'SwapCached: 0 kB' 'Active: 462160 kB' 'Inactive: 2839848 kB' 'Active(anon): 130136 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120972 kB' 'Mapped: 48844 kB' 'Shmem: 10476 kB' 'KReclaimable: 83972 kB' 'Slab: 166044 kB' 'SReclaimable: 83972 kB' 'SUnreclaim: 82072 kB' 'KernelStack: 6512 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 349120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
00:05:22.530 (scan: setup/common.sh@32 tests MemTotal through HardwareCorrupted of the snapshot above against \A\n\o\n\H\u\g\e\P\a\g\e\s; each key fails the match and continues)
00:05:22.531 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:22.531 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:22.531 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:22.531 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
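verify_nr_hugepages opens by testing the transparent-hugepage state against *\[\n\e\v\e\r\]*; this runner reports 'always [madvise] never', so THP is not hard-disabled and the script samples AnonHugePages (0 kB here, hence anon=0) before checking the hugetlb counters, keeping THP-backed anonymous memory out of the comparison. A sketch of that gate, reusing the illustrative meminfo_value helper from the earlier sketch:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(meminfo_value AnonHugePages)   # 0 on the runner traced above
    fi
    echo "anon=$anon"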
00:05:22.531 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:22.531 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:22.531 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:22.531 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:22.531 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:22.531 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:22.531 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:22.531 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:22.531 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:22.531 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:22.531 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:22.531 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:22.531 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6524580 kB' 'MemAvailable: 9501500 kB' 'Buffers: 2436 kB' 'Cached: 3179912 kB' 'SwapCached: 0 kB' 'Active: 461860 kB' 'Inactive: 2839848 kB' 'Active(anon): 129836 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120932 kB' 'Mapped: 48712 kB' 'Shmem: 10476 kB' 'KReclaimable: 83972 kB' 'Slab: 166040 kB' 'SReclaimable: 83972 kB' 'SUnreclaim: 82068 kB' 'KernelStack: 6480 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 349120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55144 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
00:05:22.532 (scan: setup/common.sh@32 tests MemTotal through AnonHugePages of the snapshot above against \H\u\g\e\P\a\g\e\s\_\S\u\r\p; each key fails the match and continues)
setup/common.sh@31 -- # read -r var val _ 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.796 
11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6524332 kB' 'MemAvailable: 9501252 kB' 'Buffers: 2436 kB' 'Cached: 3179912 kB' 'SwapCached: 0 kB' 'Active: 461864 kB' 'Inactive: 2839848 kB' 'Active(anon): 129840 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120932 kB' 'Mapped: 48712 kB' 'Shmem: 10476 kB' 'KReclaimable: 83972 kB' 'Slab: 166040 kB' 'SReclaimable: 83972 kB' 'SUnreclaim: 82068 kB' 'KernelStack: 6480 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 349120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:22.796 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.796 11:47:21 
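A note for readers tracing along: get_meminfo in setup/common.sh resolves one field by splitting each meminfo line on ': ' and comparing the key to the requested name; the escaped \H\u\g\e\P\a\g\e\s\_\S\u\r\p in the xtrace is simply how bash prints the quoted right-hand side of [[ == ]], which forces a literal (non-glob) match. A minimal standalone sketch of that pattern, assuming only the numeric value is wanted (get_meminfo_sketch is an illustrative name, not SPDK's function):

get_meminfo_sketch() {                    # sketch of the parse loop traced above
    local get=$1 var val _
    while IFS=': ' read -r var val _; do  # 'MemTotal:  12241976 kB' -> var=MemTotal, val=12241976
        [[ $var == "$get" ]] || continue  # quoted RHS makes this a literal comparison, as in the trace
        echo "$val"
        return 0
    done </proc/meminfo
    return 1
}
# usage: surp=$(get_meminfo_sketch HugePages_Surp)   # -> 0 on this box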
[xtrace elided: the snapshot above is scanned field by field against HugePages_Rsvd; every key from MemTotal through HugePages_Free misses and hits 'continue']
00:05:22.798 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:22.798 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:22.798 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:22.798 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:22.798 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:22.798 nr_hugepages=1025
00:05:22.798 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:22.798 resv_hugepages=0
00:05:22.798 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:22.798 surplus_hugepages=0
00:05:22.798 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:22.798 anon_hugepages=0
00:05:22.798 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:22.798 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:05:22.798 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:22.798 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:22.798 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:05:22.798 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
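The checks at setup/hugepages.sh@107-109 are the point of the odd_alloc test: after requesting an odd, non-power-of-two page count (1025), the kernel's hugepage counters must still balance exactly. A compacted restatement of that arithmetic using the values echoed above (variable names mirror the trace; total stands for the HugePages_Total value fetched in the next step):

nr_hugepages=1025   # odd count the test requested
surp=0              # HugePages_Surp via get_meminfo
resv=0              # HugePages_Rsvd via get_meminfo
total=1025          # HugePages_Total via get_meminfo
(( total == nr_hugepages + surp + resv )) && echo "odd allocation honored exactly"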
00:05:22.798 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:22.798 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:22.798 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:22.798 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:22.798 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:22.798 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:22.798 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:22.798 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:22.798 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6524332 kB' 'MemAvailable: 9501252 kB' 'Buffers: 2436 kB' 'Cached: 3179912 kB' 'SwapCached: 0 kB' 'Active: 461600 kB' 'Inactive: 2839848 kB' 'Active(anon): 129576 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'AnonPages: 120932 kB' 'Mapped: 48712 kB' 'Shmem: 10476 kB' 'KReclaimable: 83972 kB' 'Slab: 166044 kB' 'SReclaimable: 83972 kB' 'SUnreclaim: 82072 kB' 'KernelStack: 6480 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 349120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55160 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
[xtrace elided: the snapshot is scanned field by field against HugePages_Total; every key from MemTotal through Unaccepted misses and hits 'continue']
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6524584 kB' 'MemUsed: 5717392 kB' 'SwapCached: 0 kB' 'Active: 461864 kB' 'Inactive: 2839848 kB' 'Active(anon): 129840 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 208 kB' 'Writeback: 0 kB' 'FilePages: 3182348 kB' 'Mapped: 48712 kB' 'AnonPages: 120932 kB' 'Shmem: 10476 kB' 'KernelStack: 6480 kB' 'PageTables: 4120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83972 kB' 'Slab: 166044 kB' 'SReclaimable: 83972 kB' 'SUnreclaim: 82072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0'
00:05:22.800 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
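When get_meminfo is handed a node argument, the trace shows it swapping /proc/meminfo for the per-node sysfs file and stripping the leading 'Node <n> ' prefix so the same field names parse unchanged. A condensed sketch of that branch, under the assumption that extglob is enabled for the +([0-9]) pattern (the harness evidently enables it, since the expansion executes above):

shopt -s extglob                                   # needed for the +([0-9]) pattern below
node=0 mem_f=/proc/meminfo
[[ -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
mapfile -t mem <"$mem_f"                           # one array element per line
mem=("${mem[@]#Node +([0-9]) }")                   # 'Node 0 MemTotal: ...' -> 'MemTotal: ...'
printf '%s\n' "${mem[@]}" | grep '^HugePages_Surp' # -> HugePages_Surp: 0 on this box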
00:05:22.800-22.801 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -> continue for every key in the node0 dump above, in order, MemTotal through HugePages_Free
00:05:22.801 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:22.801 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:05:22.801 11:47:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
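What the churn above is: the traced get_meminfo walks every "key: value" line of the chosen meminfo file until the requested field matches, so each non-matching key surfaces in xtrace as a [[ ... ]] test followed by continue. A minimal bash sketch of such a lookup; the helper name and layout are illustrative, not the literal setup/common.sh code:

#!/usr/bin/env bash
# Minimal get_meminfo-style lookup (illustrative sketch).
shopt -s extglob   # enables the +([0-9]) pattern used below

get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo mem line var val _

    # Per-node counters live in sysfs; fall back to the global file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node files prefix each line with "Node N "; strip the prefix so
    # the "key: value" split below works for both file layouts.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the per-key churn in the trace
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Surp 0   # prints 0 for node0 in the run above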
00:05:22.801 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:22.801 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:22.801 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:22.801 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:22.801 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025'
00:05:22.801 node0=1025 expecting 1025
00:05:22.801 11:47:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]]
00:05:22.801 real 0m0.986s
00:05:22.801 user 0m0.415s
00:05:22.801 sys 0m0.605s
00:05:22.801 11:47:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:22.801 ************************************
00:05:22.801 END TEST odd_alloc
00:05:22.801 ************************************
00:05:22.801 11:47:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
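The START/END banners and the real/user/sys block come from a run_test wrapper in common/autotest_common.sh. A rough sketch of such a wrapper, reduced to the essentials seen in this log (illustrative, not the literal implementation):

#!/usr/bin/env bash
# run_test-style wrapper: banner, timed execution, banner (sketch).
run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    time "$@"               # run the test command; prints real/user/sys
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return "$rc"
}

my_test() { sleep 0.1; }     # stand-in for odd_alloc / custom_alloc
run_test my_test my_test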
00:05:22.801 11:47:21 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
11:47:21 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
11:47:21 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
11:47:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST custom_alloc
************************************
11:47:21 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167-172 -- # local IFS=,; local node; nodes_hp=(); local nodes_hp; local nr_hugepages=0 _nr_hugepages=0
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62-67 -- # user_nodes=(); local user_nodes; local _nr_hugepages=512; local _no_nodes=1; nodes_test=(); local -g nodes_test
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62-67 -- # user_nodes=(); local user_nodes; local _nr_hugepages=512; local _no_nodes=1; nodes_test=(); local -g nodes_test
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
11:47:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
11:47:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
11:47:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:23.370 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:23.632 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:23.632 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:23.632 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:23.632 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
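Worth spelling out how HUGENODE='nodes_hp[0]=512' was computed before scripts/setup.sh ran: the requested 1048576 kB divided by the default 2048 kB huge page gives 512 pages, all assigned to node 0 on this single-node VM. A compact sketch of that arithmetic; helper and variable names are illustrative:

#!/usr/bin/env bash
# Hugepage sizing sketch, mirroring the setup/hugepages.sh@49-65 trace.
default_hugepages=2048                 # kB, the Hugepagesize seen above

get_test_nr_hugepages() {
    local size=$1                      # requested total in kB
    (( size >= default_hugepages )) || return 1
    nr_hugepages=$(( size / default_hugepages ))
}

declare -a nodes_hp
get_test_nr_hugepages 1048576          # 1 GiB -> 1048576 / 2048 = 512
nodes_hp[0]=$nr_hugepages              # single-node box: node 0 gets all pages
HUGENODE="nodes_hp[0]=${nodes_hp[0]}"
echo "$HUGENODE"                       # nodes_hp[0]=512, as in the trace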
00:05:23.632 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=512
11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89-94 -- # local node; local sorted_t; local sorted_s; local surp; local resv; local anon
11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-20 -- # local get=AnonHugePages; local node=; local var val; local mem_f mem
11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22-25 -- # mem_f=/proc/meminfo; [[ -e /sys/devices/system/node/node/meminfo ]]; [[ -n '' ]]
11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28-31 -- # mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _
11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7573116 kB' 'MemAvailable: 10550032 kB' 'Buffers: 2436 kB' 'Cached: 3179912 kB' 'SwapCached: 0 kB' 'Active: 459080 kB' 'Inactive: 2839848 kB' 'Active(anon): 127056 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 118152 kB' 'Mapped: 48100 kB' 'Shmem: 10476 kB' 'KReclaimable: 83968 kB' 'Slab: 165920 kB' 'SReclaimable: 83968 kB' 'SUnreclaim: 81952 kB' 'KernelStack: 6440 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 335980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55064 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
00:05:23.632-23.633 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] -> continue for every key in the dump above, in order, MemTotal through HardwareCorrupted
11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
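With anon measured as 0, verify_nr_hugepages reads HugePages_Surp and HugePages_Rsvd the same way below and checks that the kernel's totals reconcile with the configured page count. A sketch of that accounting, with an awk one-liner standing in for the get_meminfo helper; names are illustrative:

#!/usr/bin/env bash
# Hugepage accounting sketch (illustrative, mirrors the traced flow).
get_meminfo() { awk -v k="$1" -F': +' '$1 == k { print $2 + 0 }' /proc/meminfo; }

verify_hugepage_accounting() {
    local nr_hugepages=$1                        # configured, e.g. 512
    local total surp resv
    total=$(get_meminfo HugePages_Total)
    surp=$(get_meminfo HugePages_Surp)           # surplus pages, 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)           # reserved pages
    # rendered in the trace as e.g. (( 512 == nr_hugepages + surp + resv ))
    (( total == nr_hugepages + surp + resv ))
}

verify_hugepage_accounting 512 && echo 'hugepage accounting consistent'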
11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17-20 -- # local get=HugePages_Surp; local node=; local var val; local mem_f mem
11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22-25 -- # mem_f=/proc/meminfo; [[ -e /sys/devices/system/node/node/meminfo ]]; [[ -n '' ]]
11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28-31 -- # mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _
11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7573116 kB' 'MemAvailable: 10550032 kB' 'Buffers: 2436 kB' 'Cached: 3179912 kB' 'SwapCached: 0 kB' 'Active: 458288 kB' 'Inactive: 2839848 kB' 'Active(anon): 126264 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 117624 kB' 'Mapped: 47972 kB' 'Shmem: 10476 kB' 'KReclaimable: 83968 kB' 'Slab: 165908 kB' 'SReclaimable: 83968 kB' 'SUnreclaim: 81940 kB' 'KernelStack: 6400 kB' 'PageTables: 3736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 335980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
00:05:23.634-23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -> continue for every key in the dump above, in order, MemTotal through HugePages_Rsvd
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7573116 kB' 'MemAvailable: 10550032 kB' 'Buffers: 2436 kB' 'Cached: 3179912 kB' 'SwapCached: 0 kB' 'Active: 458296 kB' 'Inactive: 2839848 kB' 'Active(anon): 126272 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 117624 kB' 'Mapped: 47972 kB' 'Shmem: 10476 kB' 'KReclaimable: 83968 kB' 'Slab: 165904 kB' 'SReclaimable: 83968 kB' 'SUnreclaim: 81936 kB' 'KernelStack: 6400 kB' 'PageTables: 3736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 335980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
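The xtrace above is setup/common.sh's get_meminfo helper walking a meminfo dump field by field until the requested key matches. Reconstructed from the @16-@33 trace entries, it behaves roughly like the sketch below; the parse loop, paths, and variable names come straight from the trace, while the argument handling, the fallback return, and the omitted @25 test are assumptions:

    #!/usr/bin/env bash
    # Sketch of get_meminfo as reconstructed from the setup/common.sh@16-33 xtrace.
    shopt -s extglob   # required by the +([0-9]) pattern used at @29

    get_meminfo() {
        local get=$1        # field to look up, e.g. HugePages_Surp
        local node=${2:-}   # optional NUMA node; empty reads the machine-wide file
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # Per-node statistics live in sysfs; switch files when that node exists (@23-@24).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node N "; strip it so both formats parse alike (@29).
        mem=("${mem[@]#Node +([0-9]) }")

        # Each mismatch traces as an @32 compare plus an @32 continue; the match
        # traces as @33 echo plus @33 return, printing only the numeric value.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1   # assumption: an unknown field fails the lookup
    }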
00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:23.635 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[ xtrace repeats the @31 read / @32 compare / @32 continue cycle for every field from MemFree through HugePages_Free ]
00:05:23.637 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:23.637 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:05:23.637 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:23.637 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:23.637 nr_hugepages=512
resv_hugepages=0
surplus_hugepages=0
anon_hugepages=0
00:05:23.637 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:23.637 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:23.637 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:23.637 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:23.637 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:23.637 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:23.637 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:23.637 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:23.637 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:05:23.637 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:23.637 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:23.637 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:23.637 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:23.637 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:23.637 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:23.637 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:23.637 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:23.637 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:23.637 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7573116 kB' 'MemAvailable: 10550032 kB' 'Buffers: 2436 kB' 'Cached: 3179912 kB' 'SwapCached: 0 kB' 'Active: 458292 kB' 'Inactive: 2839848 kB' 'Active(anon): 126268 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'AnonPages: 117624 kB' 'Mapped: 47972 kB' 'Shmem: 10476 kB' 'KReclaimable: 83968 kB' 'Slab: 165904 kB' 'SReclaimable: 83968 kB' 'SUnreclaim: 81936 kB' 'KernelStack: 6400 kB' 'PageTables: 3736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 335980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
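The @99-@110 entries are the custom_alloc bookkeeping: surplus and reserved pages are folded into the expected total before the kernel's HugePages_Total is compared against it. Assuming the get_meminfo sketch above, the logic traces back to roughly the following; "want" is a stand-in name, since the xtrace only shows its expanded value (512):

    # Sketch of the allocation check traced at setup/hugepages.sh@99-110 (names assumed).
    want=512                               # page count requested by the test
    nr_hugepages=512                       # target configured earlier in the run
    surp=$(get_meminfo HugePages_Surp)     # @99  -> 0
    resv=$(get_meminfo HugePages_Rsvd)     # @100 -> 0

    echo "nr_hugepages=$nr_hugepages"      # @102-@105: the four stdout lines above
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    echo "anon_hugepages=0"

    (( want == nr_hugepages + surp + resv ))                            # @107
    (( want == nr_hugepages ))                                          # @109
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))  # @110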
00:05:23.637 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:23.637 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[ xtrace repeats the @31 read / @32 compare / @32 continue cycle for every field from MemFree through Unaccepted ]
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 512
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
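get_nodes (hugepages.sh@27-@33) enumerates the NUMA node directories under sysfs and records the expected page count per node, after which the @115-@117 loop re-checks each node individually. A rough reconstruction, again assuming the get_meminfo sketch above and simplifying the nodes_sys/nodes_test bookkeeping that the trace only shows fragments of:

    # Sketch of get_nodes and the per-node re-check (setup/hugepages.sh@27-33, @115-117).
    shopt -s extglob
    declare -A nodes_sys nodes_test

    get_nodes() {
        local node
        # One iteration per nodeN directory; this VM has only node0, hence no_nodes=1.
        for node in /sys/devices/system/node/node+([0-9]); do
            nodes_sys[${node##*node}]=512   # pages expected on that node in this run
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))                  # sanity: at least one node was found
    }

    get_nodes
    nodes_test[0]=512                       # assumption: per-node expectation mirrors nodes_sys
    resv=0                                  # reserved pages, from the @100 lookup above
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))      # fold reserved pages into the expectation (@116)
        get_meminfo HugePages_Surp "$node"  # per-node surplus via the node argument (@117)
    done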
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7573116 kB' 'MemUsed: 4668860 kB' 'SwapCached: 0 kB' 'Active: 458556 kB' 'Inactive: 2839848 kB' 'Active(anon): 126532 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839848 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 216 kB' 'Writeback: 0 kB' 'FilePages: 3182348 kB' 'Mapped: 47972 kB' 'AnonPages: 117624 kB' 'Shmem: 10476 kB' 'KernelStack: 6400 kB' 'PageTables: 3736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83968 kB' 'Slab: 165904 kB' 'SReclaimable: 83968 kB' 'SUnreclaim: 81936 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:23.900 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[ xtrace repeats the @31 read / @32 compare / @32 continue cycle over the remaining node0 fields; the captured log breaks off at the ShmemPmdMapped comparison ]
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.901 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.901 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.901 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.901 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.901 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.901 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.901 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.901 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:23.902 node0=512 expecting 512 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:23.902 00:05:23.902 real 0m0.971s 00:05:23.902 user 0m0.411s 00:05:23.902 sys 0m0.584s 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:05:23.902 11:47:22 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:23.902 ************************************ 00:05:23.902 END TEST custom_alloc 00:05:23.902 ************************************ 00:05:23.902 11:47:22 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:23.902 11:47:22 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:23.902 11:47:22 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:23.902 11:47:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:23.902 ************************************ 00:05:23.902 START TEST no_shrink_alloc 00:05:23.902 ************************************ 00:05:23.902 11:47:22 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc 00:05:23.902 11:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:23.902 11:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:05:23.902 11:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:23.902 11:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:05:23.902 11:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:23.902 11:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:05:23.902 11:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:23.902 11:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:23.902 11:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:23.902 11:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:23.902 11:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:05:23.902 11:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:23.902 11:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:23.902 11:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:23.902 11:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:23.902 11:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:23.902 11:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:23.902 11:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:23.902 11:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:05:23.902 11:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:05:23.902 11:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.902 11:47:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:24.469 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:24.731 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:24.731 0000:00:10.0 (1b36 0010): Already using the 
uio_pci_generic driver 00:05:24.731 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:24.731 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6534856 kB' 'MemAvailable: 9511780 kB' 'Buffers: 2436 kB' 'Cached: 3179920 kB' 'SwapCached: 0 kB' 'Active: 458892 kB' 'Inactive: 2839856 kB' 'Active(anon): 126868 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 117692 kB' 'Mapped: 48060 kB' 'Shmem: 10476 kB' 'KReclaimable: 83968 kB' 'Slab: 165808 kB' 'SReclaimable: 83968 kB' 'SUnreclaim: 81840 kB' 'KernelStack: 6400 kB' 'PageTables: 3752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55080 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 
'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
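The long runs of "# continue" above (and the identical scan that closed the custom_alloc test earlier) are bash xtrace output of get_meminfo in setup/common.sh walking every field of the meminfo snapshot until it reaches the requested key; the backslash-escaped right-hand sides such as \A\n\o\n\H\u\g\e\P\a\g\e\s are just how xtrace quotes the pattern operand of [[ == ]]. A minimal sketch of that loop, reconstructed from the commands visible in the trace (the mapfile, the extglob "Node N " prefix strip at common.sh@29, the IFS=': ' read at @31, and the echo/return of the matched value); treat it as an approximation of setup/common.sh, not a verbatim copy:

shopt -s extglob                       # needed for the +([0-9]) pattern below

get_meminfo() {                        # get_meminfo <key> [node]
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local var val mem
    # Per-node queries read the sysfs copy instead (common.sh@23-24 above).
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem <"$mem_f"
    # sysfs prefixes every field with "Node N "; strip it (common.sh@29).
    mem=("${mem[@]#Node +([0-9]) }")
    # Scan field by field; every non-matching key is one "continue" in the log.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"                    # e.g. "echo 0" for AnonHugePages above
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

Called as get_meminfo HugePages_Free 0, for instance, it reads node0's sysfs meminfo and prints 512, matching the node0 snapshot printed in the custom_alloc trace above.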
00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.731 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 
11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
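For reference while these scans run: the expected figure for this test came from the "get_test_nr_hugepages 2097152 0" call in the START TEST no_shrink_alloc preamble above, which settled on nr_hugepages=1024 and nodes_test[0]=1024. The division itself is inferred from the snapshot pair 'Hugepagesize: 2048 kB' / 'Hugetlb: 2097152 kB' rather than shown verbatim in the log, and the helper below merges hugepages.sh's get_test_nr_hugepages and get_test_nr_hugepages_per_node into one function for brevity:

declare -ga nodes_test=()              # 'local -g nodes_test' in the trace

get_test_nr_hugepages() {              # get_test_nr_hugepages <size-kB> [node...]
    local size=$1 node
    (($# > 1)) && shift                # the '(( 2 > 1 ))' / shift pair above
    local user_nodes=("$@")            # ('0') in this run
    local default_hugepages=2048       # kB; matches Hugepagesize in the snapshots
    local nr_hugepages=0
    ((size >= default_hugepages)) && nr_hugepages=$((size / default_hugepages))
    # 2097152 kB / 2048 kB per page = 1024 pages, all assigned to node 0 here.
    for node in "${user_nodes[@]}"; do
        nodes_test[node]=$nr_hugepages # nodes_test[0]=1024 in the trace
    done
}

After get_test_nr_hugepages 2097152 0 runs, nodes_test=([0]=1024) is what verify_nr_hugepages checks the live counters against.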
00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 
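At this point verify_nr_hugepages has folded the scan result into anon=0 (hugepages.sh@97) and is starting the same field walk again for HugePages_Surp; HugePages_Rsvd follows further below. A condensed sketch of the probe order, reusing the get_meminfo sketch above; the sysfs path for the THP state is an assumption, since the log only shows the '[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]' pattern test, and how the three values are ultimately consumed is inferred from the custom_alloc trace:

verify_nr_hugepages() {
    local anon=0 surp resv thp
    # AnonHugePages only matters while THP is not pinned to [never]
    # (the hugepages.sh@96 test above; the source path here is assumed).
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 in this run
    fi
    surp=$(get_meminfo HugePages_Surp)      # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)      # queried below in the trace
    echo "anon=$anon surp=$surp resv=$resv"
}

In this run "always [madvise] never" does not match *[never]*, so the AnonHugePages scan above was taken and came back 0.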
00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6534856 kB' 'MemAvailable: 9511780 kB' 'Buffers: 2436 kB' 'Cached: 3179920 kB' 'SwapCached: 0 kB' 'Active: 459080 kB' 'Inactive: 2839856 kB' 'Active(anon): 127056 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 117900 kB' 'Mapped: 48060 kB' 'Shmem: 10476 kB' 'KReclaimable: 83968 kB' 'Slab: 165800 kB' 'SReclaimable: 83968 kB' 'SUnreclaim: 81832 kB' 'KernelStack: 6384 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.732 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.733 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.734 11:47:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
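With surp=0 banked and the HugePages_Rsvd scan starting above, the remaining step is the per-node comparison that closed custom_alloc earlier: each node's observed count is printed against its expected count and asserted equal ("node0=512 expecting 512", then [[ 512 == \5\1\2 ]]; this test expects 1024). A reconstruction of that tail from the hugepages.sh@117-130 trace; nodes_sys is assumed to hold the per-node readings gathered via get_meminfo, the sample values are hypothetical bookkeeping for this run, and the trace's "(( nodes_test[node] += 0 ))" is read here as adding the surplus count:

surp=0
declare -a nodes_test=([0]=1024) nodes_sys=([0]=1024)   # sample values
declare -a sorted_t=() sorted_s=()

for node in "${!nodes_test[@]}"; do
    ((nodes_test[node] += surp))       # surplus pages count too (@117)
    sorted_t[nodes_test[node]]=1       # distinct expected totals (@127)
    sorted_s[nodes_sys[node]]=1        # distinct observed totals (@127)
    echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
    [[ ${nodes_sys[node]} == "${nodes_test[node]}" ]]   # the @130 assertion
done

The sorted_t/sorted_s marker arrays let the script later confirm that the set of expected totals matches the set of observed totals across all nodes, which is why each value is recorded as an index rather than appended.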
00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:24.734 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6534100 kB' 'MemAvailable: 9511024 kB' 'Buffers: 2436 kB' 'Cached: 3179920 kB' 'SwapCached: 0 kB' 'Active: 458828 kB' 'Inactive: 2839856 kB' 'Active(anon): 126804 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 117656 kB' 'Mapped: 48060 kB' 'Shmem: 10476 kB' 'KReclaimable: 83968 kB' 'Slab: 165800 kB' 'SReclaimable: 83968 kB' 'SUnreclaim: 81832 kB' 'KernelStack: 6384 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
00:05:24.735 [xtrace condensed: each field of the snapshot above is read at common.sh@31 and compared against HugePages_Rsvd at common.sh@32; every non-matching key falls through to 'continue']
00:05:24.736 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:24.736 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:24.736 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:24.736 nr_hugepages=1024
00:05:24.736 resv_hugepages=0
00:05:24.736 surplus_hugepages=0
00:05:24.736 anon_hugepages=0
00:05:24.736 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:05:24.736 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:24.736 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:24.736 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:24.736 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:24.736 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:24.736 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
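The two arithmetic guards at hugepages.sh@107 and @109 are the acceptance check at this stage: with the values just echoed (nr_hugepages=1024, surp=0, resv=0) they reduce to 1024 == 1024 + 0 + 0 and 1024 == 1024. A condensed sketch of that check (variable roles inferred from the trace; the 1024 literal is the test's requested page count):

    nr_hugepages=1024   # HugePages_Total, echoed above
    surp=0              # HugePages_Surp, first scan above
    resv=0              # HugePages_Rsvd, second scan above
    # The allocation passes only if the pool is exactly the requested size
    # with no reserved or surplus pages outstanding:
    (( 1024 == nr_hugepages + surp + resv )) || exit 1
    (( 1024 == nr_hugepages )) || exit 1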
00:05:24.736 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:24.736 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:24.736 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:24.736 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:24.736 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:24.736 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:24.736 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:24.736 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:24.736 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:24.736 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:24.736 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:24.736 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6534100 kB' 'MemAvailable: 9511024 kB' 'Buffers: 2436 kB' 'Cached: 3179920 kB' 'SwapCached: 0 kB' 'Active: 458628 kB' 'Inactive: 2839856 kB' 'Active(anon): 126604 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'AnonPages: 117500 kB' 'Mapped: 48060 kB' 'Shmem: 10476 kB' 'KReclaimable: 83968 kB' 'Slab: 165796 kB' 'SReclaimable: 83968 kB' 'SUnreclaim: 81828 kB' 'KernelStack: 6400 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
00:05:24.737 [xtrace condensed: each field of the snapshot above is read at common.sh@31 and compared against HugePages_Total at common.sh@32; every non-matching key falls through to 'continue']
00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
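The per-node pass that follows mirrors the global one: get_nodes enumerates /sys/devices/system/node/node<N> (a single node here, so no_nodes=1), seeds nodes_sys with the 1024-page target, and each node's expectation is then adjusted by the reserved count plus that node's HugePages_Surp, read from the node-local meminfo (whose "Node 0 " line prefix is what common.sh@29 strips). A sketch of that accounting, reusing the get_meminfo sketch above (the nodes_test seed is hypothetical; the real test populates it earlier in the run):

    shopt -s extglob nullglob
    declare -A nodes_sys nodes_test
    nodes_test[0]=1024          # hypothetical seed; set earlier by the real test
    resv=0                      # from the HugePages_Rsvd scan above

    # get_nodes: one entry per NUMA node directory.
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=1024
    done
    no_nodes=${#nodes_sys[@]}
    (( no_nodes > 0 )) || exit 1

    # Fold reserved pages plus each node's surplus into the expectation.
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    done

With one node and zero reserved/surplus pages, nodes_test[0] stays at 1024, which is what the "node0=1024 expecting 1024" line below reports.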
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6534100 kB' 'MemUsed: 5707876 kB' 'SwapCached: 0 kB' 'Active: 458368 kB' 'Inactive: 2839856 kB' 'Active(anon): 126344 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839856 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 224 kB' 'Writeback: 0 kB' 'FilePages: 3182356 kB' 'Mapped: 48060 kB' 'AnonPages: 117760 kB' 'Shmem: 10476 kB' 'KernelStack: 6400 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83968 kB' 'Slab: 165796 kB' 'SReclaimable: 83968 kB' 'SUnreclaim: 81828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:24.738 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:24.738 11:47:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[trace condensed: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue for each remaining meminfo field, Inactive(anon) through HugePages_Free, none matching HugePages_Surp]
00:05:24.739 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:24.739 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:24.739 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:24.739 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:24.739 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:24.739 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:24.739 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:24.740 node0=1024 expecting 1024
11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:24.740 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:24.740 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:24.740 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:24.740 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:05:24.740 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:05:24.740 11:47:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
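What the hugepages.sh@117-130 steps above amount to is a per-node comparison: read each node's allocated hugepages, report it as "node0=1024 expecting 1024", and require that it matches the expected count. A minimal bash sketch of that shape (the nodes_test name mirrors the trace; the sysfs parsing and exit handling are assumptions, not the verbatim setup/hugepages.sh):

#!/usr/bin/env bash
# Sketch: compare each NUMA node's allocated hugepages with an expected count.
declare -A nodes_test=([0]=1024)   # expected 2048 kB pages per node in this run
for node in "${!nodes_test[@]}"; do
    # Per-node meminfo lines read "Node 0 HugePages_Total:  1024".
    actual=$(awk '/HugePages_Total:/ {print $NF}' "/sys/devices/system/node/node$node/meminfo")
    echo "node$node=$actual expecting ${nodes_test[$node]}"
    [[ $actual == "${nodes_test[$node]}" ]] || exit 1   # any mismatch fails the check
done

The check passes here, so the test re-runs scripts/setup.sh with CLEAR_HUGE=no and NRHUGE=512; since 1024 pages are already allocated on node0, setup.sh keeps the existing pool, as the INFO line below notes.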
00:05:25.306 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:25.567 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:25.567 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:25.567 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:25.567 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:25.567 INFO: Requested 512 hugepages but 1024 already allocated on node0
11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6534220 kB' 'MemAvailable: 9511140 kB' 'Buffers: 2436 kB' 'Cached: 3179916 kB' 'SwapCached: 0 kB' 'Active: 458908 kB' 'Inactive: 2839852 kB' 'Active(anon): 126884 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 117944 kB' 'Mapped: 48100 kB' 'Shmem: 10476 kB' 'KReclaimable: 83968 kB' 'Slab: 165804 kB' 'SReclaimable: 83968 kB' 'SUnreclaim: 81836 kB' 'KernelStack: 6356 kB' 'PageTables: 3644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55064 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
[trace condensed: setup/common.sh@31-32 read/compare/continue repeated for every field of the snapshot above, MemTotal through HardwareCorrupted, none matching AnonHugePages]
00:05:25.569 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:25.569 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:25.569 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:25.569 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:05:25.569 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
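Every get_meminfo call in this trace is the same linear scan: read the meminfo file into an array, strip any "Node <N> " prefix, then split each line on IFS=': ' until the requested field is found and its value echoed. A self-contained sketch of the pattern (an assumed simplification, not the verbatim setup/common.sh helper):

#!/usr/bin/env bash
shopt -s extglob   # enables the +([0-9]) pattern used to strip "Node <N> "

# Sketch: print the value of one field from /proc/meminfo or a per-node meminfo.
get_meminfo() {
    local get=$1 node=$2 mem_f=/proc/meminfo line var val _
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }             # per-node files prefix every line
        IFS=': ' read -r var val _ <<< "$line"  # "AnonHugePages: 0 kB" -> var, val
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < "$mem_f"
    return 1   # field not present
}

get_meminfo AnonHugePages   # prints 0 for the snapshot above

Scanning line by line keeps the helper free of external tools, which is exactly why the trace records one read/compare/continue step per meminfo field on every call.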
00:05:25.569 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:25.569 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:25.569 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:25.569 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:25.569 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:25.569 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:25.569 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:25.569 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:25.569 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:25.569 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:25.569 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6533968 kB' 'MemAvailable: 9510888 kB' 'Buffers: 2436 kB' 'Cached: 3179916 kB' 'SwapCached: 0 kB' 'Active: 458600 kB' 'Inactive: 2839852 kB' 'Active(anon): 126576 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 117672 kB' 'Mapped: 47976 kB' 'Shmem: 10476 kB' 'KReclaimable: 83968 kB' 'Slab: 165800 kB' 'SReclaimable: 83968 kB' 'SUnreclaim: 81832 kB' 'KernelStack: 6400 kB' 'PageTables: 3732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
00:05:25.569 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[trace condensed: setup/common.sh@31-32 read/compare/continue repeated for MemTotal through HugePages_Rsvd, none matching HugePages_Surp]
00:05:25.571 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:25.571 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:05:25.571 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:05:25.571 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:05:25.571 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
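Before this third scan runs, two of the three inputs are already known: anon=0 and surp=0. For orientation, HugePages_Surp counts surplus pages allocated beyond nr_hugepages through overcommit, and HugePages_Rsvd counts pages promised to mappings but not yet faulted in, so a pool can look free while being partly spoken for. The counters the scans walk past one field at a time can be spot-checked by hand:

grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize):' /proc/meminfo
# HugePages_Total:    1024
# HugePages_Free:     1024
# HugePages_Rsvd:        0
# HugePages_Surp:        0
# Hugepagesize:       2048 kB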
00:05:25.571 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:25.571 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:05:25.571 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:05:25.571 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:05:25.571 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:25.571 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:25.571 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:25.571 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:05:25.571 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:25.571 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:05:25.571 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:05:25.571 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6533968 kB' 'MemAvailable: 9510888 kB' 'Buffers: 2436 kB' 'Cached: 3179916 kB' 'SwapCached: 0 kB' 'Active: 458396 kB' 'Inactive: 2839852 kB' 'Active(anon): 126372 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 117728 kB' 'Mapped: 47976 kB' 'Shmem: 10476 kB' 'KReclaimable: 83968 kB' 'Slab: 165800 kB' 'SReclaimable: 83968 kB' 'SUnreclaim: 81832 kB' 'KernelStack: 6400 kB' 'PageTables: 3732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55048 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB'
[trace condensed: setup/common.sh@31-32 read/compare/continue repeated for MemTotal through KernelStack, none matching HugePages_Rsvd]
00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:25.572 11:47:24
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.572 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.573 11:47:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:25.573 nr_hugepages=1024 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:25.573 resv_hugepages=0 00:05:25.573 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:25.573 surplus_hugepages=0 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:25.833 anon_hugepages=0 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:25.833 11:47:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6533968 kB' 'MemAvailable: 9510888 kB' 'Buffers: 2436 kB' 'Cached: 3179916 kB' 'SwapCached: 0 kB' 'Active: 458592 kB' 'Inactive: 2839852 kB' 'Active(anon): 126568 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 240 kB' 'Writeback: 0 kB' 'AnonPages: 117664 kB' 'Mapped: 47976 kB' 'Shmem: 10476 kB' 'KReclaimable: 83968 kB' 'Slab: 165800 kB' 'SReclaimable: 83968 kB' 'SUnreclaim: 81832 kB' 'KernelStack: 6384 kB' 'PageTables: 3688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 335980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55064 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 180076 kB' 'DirectMap2M: 5062656 kB' 'DirectMap1G: 9437184 kB' 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.833 11:47:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.833 
11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.833 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:05:25.834 
11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:25.834 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6533968 kB' 'MemUsed: 5708008 kB' 'SwapCached: 0 kB' 'Active: 458604 kB' 'Inactive: 2839852 kB' 'Active(anon): 126580 kB' 'Inactive(anon): 0 kB' 'Active(file): 332024 kB' 'Inactive(file): 2839852 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3182352 kB' 'Mapped: 47976 kB' 'AnonPages: 117672 kB' 'Shmem: 10476 kB' 'KernelStack: 6400 kB' 'PageTables: 3732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 83968 kB' 'Slab: 165800 kB' 'SReclaimable: 83968 kB' 'SUnreclaim: 81832 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
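The field-by-field scan running through this section is the trace of one helper, get_meminfo: it reads /proc/meminfo, or /sys/devices/system/node/node0/meminfo when a node argument is given (as in this HugePages_Surp lookup), strips the leading 'Node 0 ' prefix from per-node rows, then splits each line on IFS=': ' and compares keys until the requested one matches, echoing its value. A minimal standalone sketch of that technique, assuming the sysfs layout shown in the trace; the real setup/common.sh variant (mapfile into an array, then a positional walk) differs in detail:

    #!/usr/bin/env bash
    shopt -s extglob                         # enables the +([0-9]) pattern, as in the trace
    # get_meminfo KEY [NODE] -- echo the value recorded for KEY
    get_meminfo() {
        local get=$1 node=${2-} mem_f=/proc/meminfo line var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }      # per-node rows carry a "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"                  # e.g. 0 for HugePages_Surp on node 0
                return 0
            fi
        done < "$mem_f"
        return 1
    }
    get_meminfo HugePages_Surp 0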
00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.835 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.836 11:47:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:25.836 node0=1024 expecting 1024 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:25.836 00:05:25.836 real 0m1.881s 00:05:25.836 user 0m0.762s 00:05:25.836 sys 0m1.195s 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.836 11:47:24 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 
-- # set +x 00:05:25.836 ************************************ 00:05:25.836 END TEST no_shrink_alloc 00:05:25.836 ************************************ 00:05:25.836 11:47:24 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:25.836 11:47:24 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:25.836 11:47:24 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:25.836 11:47:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:25.836 11:47:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:25.836 11:47:24 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:25.836 11:47:24 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:25.836 11:47:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:25.836 11:47:24 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:25.836 ************************************ 00:05:25.836 END TEST hugepages 00:05:25.836 ************************************ 00:05:25.836 00:05:25.836 real 0m8.051s 00:05:25.836 user 0m3.327s 00:05:25.836 sys 0m4.945s 00:05:25.836 11:47:24 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:25.836 11:47:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:25.836 11:47:24 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:25.836 11:47:24 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.836 11:47:24 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.836 11:47:24 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:25.836 ************************************ 00:05:25.836 START TEST driver 00:05:25.836 ************************************ 00:05:25.836 11:47:24 setup.sh.driver -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:26.093 * Looking for test storage... 
00:05:26.093 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:26.093 11:47:24 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:26.093 11:47:24 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:26.093 11:47:24 setup.sh.driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:32.652 11:47:30 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:32.652 11:47:30 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:32.652 11:47:30 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:32.652 11:47:30 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:32.652 ************************************ 00:05:32.652 START TEST guess_driver 00:05:32.652 ************************************ 00:05:32.652 11:47:30 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:05:32.652 11:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:32.652 11:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:32.652 11:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:32.652 11:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:32.652 11:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:32.652 11:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:32.652 11:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:32.652 11:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:32.652 11:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:32.652 11:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:32.652 11:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@32 -- # return 1 00:05:32.652 11:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@38 -- # uio 00:05:32.652 11:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:32.652 11:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:32.652 11:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:32.652 11:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:32.653 11:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:32.653 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:32.653 11:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:32.653 11:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:32.653 11:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:32.653 Looking for driver=uio_pci_generic 00:05:32.653 11:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:32.653 11:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:32.653 11:47:30 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 
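The guess_driver trace above reduces to a two-step fallback: prefer vfio when the kernel exposes IOMMU groups (or vfio's unsafe no-IOMMU mode is enabled), otherwise accept uio_pci_generic if modprobe can resolve it to a real module. A minimal sketch of that decision, assuming bash plus standard kmod/util-linux tooling; pick_pci_driver is a hypothetical name, not the SPDK helper itself:

    pick_pci_driver() {
        local unsafe=""
        # vfio is only usable when IOMMU groups exist or unsafe no-IOMMU mode is on
        [[ -r /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        if compgen -G '/sys/kernel/iommu_groups/*' > /dev/null || [[ $unsafe == Y ]]; then
            echo vfio-pci
        # --show-depends prints the resolved .ko chain without loading anything
        elif modprobe --show-depends uio_pci_generic 2>/dev/null | grep -q '\.ko'; then
            echo uio_pci_generic
        else
            echo 'No valid driver found' >&2
            return 1
        fi
    }

On this VM the iommu_groups glob matches nothing and unsafe mode is not Y, so the trace returns 1 from the vfio branch and lands on uio_pci_generic, which the config/reset steps below then verify against each PCI device.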
00:05:32.653 11:47:30 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.653 11:47:30 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:32.912 11:47:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:32.912 11:47:31 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # continue 00:05:32.912 11:47:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:33.478 11:47:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:33.478 11:47:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:33.478 11:47:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:33.478 11:47:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:33.478 11:47:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:33.478 11:47:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:33.740 11:47:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:33.740 11:47:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:33.740 11:47:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:33.740 11:47:32 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:33.740 11:47:32 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:33.740 11:47:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:33.740 11:47:32 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:33.740 11:47:32 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:33.740 11:47:32 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:33.740 11:47:32 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:40.309 00:05:40.309 real 0m7.715s 00:05:40.309 user 0m0.969s 00:05:40.309 sys 0m1.896s 00:05:40.309 11:47:38 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.309 ************************************ 00:05:40.309 END TEST guess_driver 00:05:40.309 ************************************ 00:05:40.309 11:47:38 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:40.309 ************************************ 00:05:40.309 END TEST driver 00:05:40.309 ************************************ 00:05:40.309 00:05:40.309 real 0m14.072s 00:05:40.309 user 0m1.412s 00:05:40.309 sys 0m2.992s 00:05:40.309 11:47:38 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:40.309 11:47:38 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:40.309 11:47:38 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:40.309 11:47:38 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:40.309 11:47:38 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:40.309 11:47:38 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:40.309 ************************************ 00:05:40.309 START TEST devices 00:05:40.309 
************************************ 00:05:40.309 11:47:38 setup.sh.devices -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:40.309 * Looking for test storage... 00:05:40.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:40.309 11:47:38 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:40.309 11:47:38 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:40.309 11:47:38 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:40.309 11:47:38 setup.sh.devices -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:05:41.688 
11:47:40 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:05:41.688 11:47:40 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:41.688 11:47:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:41.688 11:47:40 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:41.688 No valid GPT data, bailing 00:05:41.688 11:47:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:41.688 11:47:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:41.688 11:47:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:41.688 11:47:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:41.688 11:47:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:41.688 11:47:40 setup.sh.devices -- setup/common.sh@80 -- # echo 5368709120 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@201 -- # 
ctrl=nvme1 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:41.688 11:47:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:41.688 11:47:40 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:41.688 No valid GPT data, bailing 00:05:41.688 11:47:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:41.688 11:47:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:41.688 11:47:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:41.688 11:47:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:41.688 11:47:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:41.688 11:47:40 setup.sh.devices -- setup/common.sh@80 -- # echo 6343335936 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:05:41.688 11:47:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:05:41.688 11:47:40 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:05:41.688 No valid GPT data, bailing 00:05:41.688 11:47:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:05:41.688 11:47:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:41.688 11:47:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:05:41.688 11:47:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n1 00:05:41.688 11:47:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:05:41.688 11:47:40 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n2 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:41.688 
11:47:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:41.688 11:47:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n2 00:05:41.688 11:47:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n2 pt 00:05:41.688 11:47:40 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n2 00:05:41.948 No valid GPT data, bailing 00:05:41.948 11:47:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:05:41.948 11:47:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:41.948 11:47:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:41.948 11:47:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n2 00:05:41.948 11:47:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n2 00:05:41.948 11:47:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n2 ]] 00:05:41.948 11:47:40 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:41.948 11:47:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:41.948 11:47:40 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:41.948 11:47:40 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:41.948 11:47:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:41.948 11:47:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2n3 00:05:41.948 11:47:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme2 00:05:41.948 11:47:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:12.0 00:05:41.948 11:47:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\2\.\0* ]] 00:05:41.948 11:47:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme2n3 00:05:41.948 11:47:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme2n3 pt 00:05:41.948 11:47:40 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n3 00:05:41.948 No valid GPT data, bailing 00:05:41.948 11:47:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:05:41.948 11:47:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:41.948 11:47:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:41.948 11:47:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n3 00:05:41.948 11:47:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme2n3 00:05:41.948 11:47:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n3 ]] 00:05:41.948 11:47:40 setup.sh.devices -- setup/common.sh@80 -- # echo 4294967296 00:05:41.948 11:47:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:41.948 11:47:40 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:41.948 11:47:40 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:12.0 00:05:41.948 11:47:40 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:41.948 11:47:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:05:41.948 11:47:40 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme3 00:05:41.948 11:47:40 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:00:13.0 00:05:41.948 11:47:40 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\3\.\0* ]] 00:05:41.948 
11:47:40 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:05:41.948 11:47:40 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme3n1 pt 00:05:41.948 11:47:40 setup.sh.devices -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:05:41.948 No valid GPT data, bailing 00:05:41.948 11:47:40 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:05:41.948 11:47:40 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:41.948 11:47:40 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:41.948 11:47:40 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:05:41.948 11:47:40 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme3n1 00:05:41.948 11:47:40 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:05:41.948 11:47:40 setup.sh.devices -- setup/common.sh@80 -- # echo 1073741824 00:05:41.948 11:47:40 setup.sh.devices -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:05:41.948 11:47:40 setup.sh.devices -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:05:41.948 11:47:40 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:41.948 11:47:40 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:41.948 11:47:40 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:41.948 11:47:40 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:41.948 11:47:40 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:41.948 ************************************ 00:05:41.948 START TEST nvme_mount 00:05:41.948 ************************************ 00:05:41.948 11:47:40 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:05:41.948 11:47:40 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:41.948 11:47:40 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:41.948 11:47:40 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:41.948 11:47:40 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:41.948 11:47:40 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:41.948 11:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:41.948 11:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:41.948 11:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:41.948 11:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:41.948 11:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:41.948 11:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:41.948 11:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:41.948 11:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:41.948 11:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:41.948 11:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:41.948 11:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:41.948 11:47:40 setup.sh.devices.nvme_mount -- 
setup/common.sh@51 -- # (( size /= 4096 )) 00:05:41.948 11:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:41.948 11:47:40 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:43.328 Creating new GPT entries in memory. 00:05:43.328 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:43.328 other utilities. 00:05:43.328 11:47:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:43.328 11:47:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:43.328 11:47:41 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:43.328 11:47:41 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:43.328 11:47:41 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:44.272 Creating new GPT entries in memory. 00:05:44.272 The operation has completed successfully. 00:05:44.272 11:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:44.272 11:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:44.272 11:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 71950 00:05:44.273 11:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:44.273 11:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:44.273 11:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:44.273 11:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:44.273 11:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:44.273 11:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:44.273 11:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:44.273 11:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:44.273 11:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:44.273 11:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:44.273 11:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:44.273 11:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:44.273 11:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:44.273 11:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:44.273 11:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:44.273 11:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.273 11:47:42 setup.sh.devices.nvme_mount -- 
setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:44.273 11:47:42 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:44.273 11:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:44.273 11:47:42 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:44.531 11:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:44.531 11:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:44.531 11:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:44.531 11:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.531 11:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:44.531 11:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.531 11:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:44.531 11:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.790 11:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:44.790 11:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:44.790 11:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:44.790 11:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.049 11:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:45.049 11:47:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.308 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:45.308 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:45.308 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:45.308 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:45.308 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:45.308 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:45.308 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:45.308 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:45.308 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:45.308 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:45.308 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:45.308 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:45.308 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs 
--all /dev/nvme0n1 00:05:45.568 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:45.568 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:45.568 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:45.568 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:45.568 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:45.568 11:47:44 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:45.568 11:47:44 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:45.568 11:47:44 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:45.568 11:47:44 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:45.568 11:47:44 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:45.568 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:45.568 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:45.568 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:45.568 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:45.568 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:45.568 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:45.568 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:45.568 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:45.568 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:45.568 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.568 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:45.568 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:45.568 11:47:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:45.568 11:47:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:45.827 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:45.827 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:45.827 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:45.827 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:45.827 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:45.827 11:47:44 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.086 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:46.086 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.345 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:46.346 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.346 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:46.346 11:47:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.605 11:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:46.605 11:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.863 11:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:46.863 11:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:46.863 11:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:46.863 11:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:46.863 11:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:46.863 11:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:46.863 11:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:46.863 11:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:46.863 11:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:46.863 11:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:46.863 11:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:46.863 11:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:46.863 11:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:46.863 11:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:46.863 11:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.863 11:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:46.863 11:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:46.863 11:47:45 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:46.863 11:47:45 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:47.122 11:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:47.122 11:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:47.122 11:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:47.122 11:47:45 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.122 11:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:47.122 11:47:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.381 11:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:47.381 11:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.640 11:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:47.640 11:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.640 11:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:47.640 11:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.898 11:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:47.898 11:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.187 11:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:48.187 11:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:48.187 11:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:48.187 11:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:48.187 11:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:48.187 11:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:48.187 11:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:48.187 11:47:46 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:48.187 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:48.187 00:05:48.187 real 0m6.110s 00:05:48.187 user 0m1.548s 00:05:48.187 sys 0m2.250s 00:05:48.187 11:47:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:48.187 11:47:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:48.187 ************************************ 00:05:48.187 END TEST nvme_mount 00:05:48.187 ************************************ 00:05:48.187 11:47:46 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:48.187 11:47:46 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:48.187 11:47:46 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:48.187 11:47:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:48.187 ************************************ 00:05:48.187 START TEST dm_mount 00:05:48.187 ************************************ 00:05:48.187 11:47:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:05:48.187 11:47:46 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:48.187 11:47:46 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:48.187 11:47:46 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:48.187 11:47:46 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:48.187 11:47:46 setup.sh.devices.dm_mount -- 
setup/common.sh@39 -- # local disk=nvme0n1 00:05:48.187 11:47:46 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:48.187 11:47:46 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:48.187 11:47:46 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:48.187 11:47:46 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:48.187 11:47:46 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:48.187 11:47:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:48.187 11:47:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:48.187 11:47:46 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:48.187 11:47:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:48.187 11:47:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:48.187 11:47:46 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:48.187 11:47:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:48.187 11:47:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:48.187 11:47:46 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:48.187 11:47:46 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:48.187 11:47:46 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:49.123 Creating new GPT entries in memory. 00:05:49.123 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:49.123 other utilities. 00:05:49.123 11:47:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:49.123 11:47:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:49.123 11:47:47 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:49.123 11:47:47 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:49.123 11:47:47 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:50.498 Creating new GPT entries in memory. 00:05:50.498 The operation has completed successfully. 00:05:50.498 11:47:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:50.498 11:47:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:50.498 11:47:49 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:50.498 11:47:49 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:50.498 11:47:49 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:51.434 The operation has completed successfully. 
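Both partition tables in this section are built the same way: wipe the label, then append partitions under an exclusive lock so nothing re-reads the table mid-rewrite. A sketch of the sequence the dm_mount test just completed, with the device name as an assumption and udevadm settle standing in for SPDK's sync_dev_uevents.sh helper:

    disk=/dev/nvme0n1                                   # assumed disk under test
    sgdisk "$disk" --zap-all                            # destroy old GPT/MBR structures
    flock "$disk" sgdisk "$disk" --new=1:2048:264191    # p1: sectors 2048..264191
    flock "$disk" sgdisk "$disk" --new=2:264192:526335  # p2: the next 262144 sectors
    udevadm settle                                      # wait for the partition nodes

The spans follow from partition_drive's arithmetic above: the 1073741824-byte size constant divided by 4096 gives 262144 sectors per partition, so part 1 runs 2048..264191 and part 2 starts at 264192 (128 MiB each at 512-byte logical sectors).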
00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 72586 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- 
setup/devices.sh@56 -- # : 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:51.434 11:47:50 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:51.692 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:51.692 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:51.692 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:51.692 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.692 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:51.692 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.950 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:51.950 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.950 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:51.950 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.950 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:51.950 11:47:50 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.515 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:52.515 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.515 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:52.515 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:52.515 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:52.515 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:52.515 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:52.515 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:52.515 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:52.515 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:52.515 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:52.515 11:47:51 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:05:52.515 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:52.515 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:52.515 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:52.515 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:52.515 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.515 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:52.515 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:52.515 11:47:51 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:52.515 11:47:51 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:53.081 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.081 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:53.081 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:53.081 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.081 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.081 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.081 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.081 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.340 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.340 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.340 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:12.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.340 11:47:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.599 11:47:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:13.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:53.599 11:47:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:53.857 11:47:52 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:53.857 11:47:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:53.857 11:47:52 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:53.857 11:47:52 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:53.857 11:47:52 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:53.858 11:47:52 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:53.858 11:47:52 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:53.858 11:47:52 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:53.858 11:47:52 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 
00:05:53.858 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:53.858 11:47:52 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:53.858 11:47:52 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:53.858 00:05:53.858 real 0m5.685s 00:05:53.858 user 0m1.047s 00:05:53.858 sys 0m1.567s 00:05:53.858 11:47:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:53.858 11:47:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:53.858 ************************************ 00:05:53.858 END TEST dm_mount 00:05:53.858 ************************************ 00:05:53.858 11:47:52 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:53.858 11:47:52 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:53.858 11:47:52 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:53.858 11:47:52 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:53.858 11:47:52 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:53.858 11:47:52 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:53.858 11:47:52 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:54.121 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:54.121 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:54.121 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:54.121 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:54.121 11:47:52 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:54.121 11:47:52 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:54.121 11:47:52 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:54.121 11:47:52 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:54.121 11:47:52 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:54.121 11:47:52 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:54.121 11:47:52 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:54.383 ************************************ 00:05:54.383 END TEST devices 00:05:54.383 ************************************ 00:05:54.383 00:05:54.383 real 0m14.204s 00:05:54.383 user 0m3.566s 00:05:54.383 sys 0m4.973s 00:05:54.383 11:47:52 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.383 11:47:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:54.383 00:05:54.383 real 0m50.011s 00:05:54.383 user 0m11.758s 00:05:54.383 sys 0m18.249s 00:05:54.383 11:47:53 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.383 11:47:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:54.383 ************************************ 00:05:54.383 END TEST setup.sh 00:05:54.383 ************************************ 00:05:54.383 11:47:53 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:54.960 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:55.526 Hugepages 00:05:55.526 node hugesize free / total 00:05:55.526 node0 1048576kB 0 / 0 00:05:55.526 node0 2048kB 2048 / 2048 00:05:55.526 
00:05:55.526 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:55.526 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:55.783 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:55.783 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:56.041 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:05:56.041 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:05:56.041 11:47:54 -- spdk/autotest.sh@130 -- # uname -s 00:05:56.041 11:47:54 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:56.041 11:47:54 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:56.041 11:47:54 -- common/autotest_common.sh@1527 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:56.608 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:57.542 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:05:57.542 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:57.542 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:57.542 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:05:57.542 11:47:56 -- common/autotest_common.sh@1528 -- # sleep 1 00:05:58.491 11:47:57 -- common/autotest_common.sh@1529 -- # bdfs=() 00:05:58.491 11:47:57 -- common/autotest_common.sh@1529 -- # local bdfs 00:05:58.491 11:47:57 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:05:58.491 11:47:57 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:05:58.491 11:47:57 -- common/autotest_common.sh@1509 -- # bdfs=() 00:05:58.491 11:47:57 -- common/autotest_common.sh@1509 -- # local bdfs 00:05:58.491 11:47:57 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:58.491 11:47:57 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:58.491 11:47:57 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:05:58.750 11:47:57 -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:05:58.750 11:47:57 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:05:58.750 11:47:57 -- common/autotest_common.sh@1532 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:59.008 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:59.267 Waiting for block devices as requested 00:05:59.526 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:59.526 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:59.526 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:05:59.784 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:05.076 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:05.076 11:48:03 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:05.076 11:48:03 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:05.076 11:48:03 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:05.076 11:48:03 -- common/autotest_common.sh@1498 -- # grep 0000:00:10.0/nvme/nvme 00:06:05.076 11:48:03 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:05.076 11:48:03 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:05.076 11:48:03 -- common/autotest_common.sh@1503 -- # basename 
/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:05.076 11:48:03 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme1 00:06:05.076 11:48:03 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme1 00:06:05.076 11:48:03 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme1 ]] 00:06:05.076 11:48:03 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme1 00:06:05.076 11:48:03 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:05.076 11:48:03 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:05.076 11:48:03 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:06:05.076 11:48:03 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:05.076 11:48:03 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:05.076 11:48:03 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme1 00:06:05.076 11:48:03 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:05.076 11:48:03 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:05.076 11:48:03 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:05.076 11:48:03 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:05.076 11:48:03 -- common/autotest_common.sh@1553 -- # continue 00:06:05.076 11:48:03 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:05.076 11:48:03 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:05.076 11:48:03 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:05.076 11:48:03 -- common/autotest_common.sh@1498 -- # grep 0000:00:11.0/nvme/nvme 00:06:05.076 11:48:03 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:05.076 11:48:03 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:05.076 11:48:03 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:05.076 11:48:03 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:06:05.076 11:48:03 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:06:05.076 11:48:03 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:06:05.076 11:48:03 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:06:05.076 11:48:03 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:05.076 11:48:03 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:05.076 11:48:03 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:06:05.076 11:48:03 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:05.076 11:48:03 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:05.076 11:48:03 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:06:05.076 11:48:03 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:05.076 11:48:03 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:05.076 11:48:03 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:05.076 11:48:03 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:05.076 11:48:03 -- common/autotest_common.sh@1553 -- # continue 00:06:05.077 11:48:03 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:05.077 11:48:03 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:06:05.077 11:48:03 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:05.077 11:48:03 -- common/autotest_common.sh@1498 -- # 
grep 0000:00:12.0/nvme/nvme 00:06:05.077 11:48:03 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:05.077 11:48:03 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:06:05.077 11:48:03 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:06:05.077 11:48:03 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme2 00:06:05.077 11:48:03 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme2 00:06:05.077 11:48:03 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme2 ]] 00:06:05.077 11:48:03 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme2 00:06:05.077 11:48:03 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:05.077 11:48:03 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:05.077 11:48:03 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:06:05.077 11:48:03 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:05.077 11:48:03 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:05.077 11:48:03 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme2 00:06:05.077 11:48:03 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:05.077 11:48:03 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:05.077 11:48:03 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:05.077 11:48:03 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:05.077 11:48:03 -- common/autotest_common.sh@1553 -- # continue 00:06:05.077 11:48:03 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 00:06:05.077 11:48:03 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:06:05.077 11:48:03 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:06:05.077 11:48:03 -- common/autotest_common.sh@1498 -- # grep 0000:00:13.0/nvme/nvme 00:06:05.077 11:48:03 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:05.077 11:48:03 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:06:05.077 11:48:03 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:06:05.077 11:48:03 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme3 00:06:05.077 11:48:03 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme3 00:06:05.077 11:48:03 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme3 ]] 00:06:05.077 11:48:03 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme3 00:06:05.077 11:48:03 -- common/autotest_common.sh@1541 -- # grep oacs 00:06:05.077 11:48:03 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:06:05.077 11:48:03 -- common/autotest_common.sh@1541 -- # oacs=' 0x12a' 00:06:05.077 11:48:03 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:06:05.077 11:48:03 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:06:05.077 11:48:03 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme3 00:06:05.077 11:48:03 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:06:05.077 11:48:03 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:06:05.077 11:48:03 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:06:05.077 11:48:03 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:06:05.077 11:48:03 -- common/autotest_common.sh@1553 -- # continue 00:06:05.077 11:48:03 -- spdk/autotest.sh@135 -- # timing_exit 
pre_cleanup 00:06:05.077 11:48:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:05.077 11:48:03 -- common/autotest_common.sh@10 -- # set +x 00:06:05.077 11:48:03 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:06:05.077 11:48:03 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:05.077 11:48:03 -- common/autotest_common.sh@10 -- # set +x 00:06:05.077 11:48:03 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:05.642 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:06.575 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:06.575 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:06.575 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:06.575 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:06.575 11:48:05 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:06.575 11:48:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:06.575 11:48:05 -- common/autotest_common.sh@10 -- # set +x 00:06:06.575 11:48:05 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:06.575 11:48:05 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:06:06.575 11:48:05 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:06:06.575 11:48:05 -- common/autotest_common.sh@1573 -- # bdfs=() 00:06:06.575 11:48:05 -- common/autotest_common.sh@1573 -- # local bdfs 00:06:06.575 11:48:05 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:06:06.575 11:48:05 -- common/autotest_common.sh@1509 -- # bdfs=() 00:06:06.575 11:48:05 -- common/autotest_common.sh@1509 -- # local bdfs 00:06:06.575 11:48:05 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:06.575 11:48:05 -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:06.575 11:48:05 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:06:06.833 11:48:05 -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:06:06.833 11:48:05 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:06:06.833 11:48:05 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:06.833 11:48:05 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:06.833 11:48:05 -- common/autotest_common.sh@1576 -- # device=0x0010 00:06:06.833 11:48:05 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:06.833 11:48:05 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:06.833 11:48:05 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:06.833 11:48:05 -- common/autotest_common.sh@1576 -- # device=0x0010 00:06:06.833 11:48:05 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:06.833 11:48:05 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:06.833 11:48:05 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:06:06.833 11:48:05 -- common/autotest_common.sh@1576 -- # device=0x0010 00:06:06.833 11:48:05 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:06.833 11:48:05 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:06:06.833 11:48:05 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:06:06.833 11:48:05 -- common/autotest_common.sh@1576 -- # device=0x0010 00:06:06.833 
11:48:05 -- common/autotest_common.sh@1577 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:06.833 11:48:05 -- common/autotest_common.sh@1582 -- # printf '%s\n' 00:06:06.833 11:48:05 -- common/autotest_common.sh@1588 -- # [[ -z '' ]] 00:06:06.833 11:48:05 -- common/autotest_common.sh@1589 -- # return 0 00:06:06.833 11:48:05 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:06.833 11:48:05 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:06.833 11:48:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:06.833 11:48:05 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:06.833 11:48:05 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:06.833 11:48:05 -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:06.833 11:48:05 -- common/autotest_common.sh@10 -- # set +x 00:06:06.833 11:48:05 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:06.833 11:48:05 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:06.833 11:48:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.833 11:48:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.833 11:48:05 -- common/autotest_common.sh@10 -- # set +x 00:06:06.833 ************************************ 00:06:06.833 START TEST env 00:06:06.833 ************************************ 00:06:06.833 11:48:05 env -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:06.833 * Looking for test storage... 00:06:06.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:06.833 11:48:05 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:06.833 11:48:05 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:06.833 11:48:05 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:06.833 11:48:05 env -- common/autotest_common.sh@10 -- # set +x 00:06:06.833 ************************************ 00:06:06.833 START TEST env_memory 00:06:06.833 ************************************ 00:06:06.833 11:48:05 env.env_memory -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:06.833 00:06:06.833 00:06:06.833 CUnit - A unit testing framework for C - Version 2.1-3 00:06:06.833 http://cunit.sourceforge.net/ 00:06:06.833 00:06:06.833 00:06:06.833 Suite: memory 00:06:07.091 Test: alloc and free memory map ...[2024-07-21 11:48:05.712850] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:07.091 passed 00:06:07.091 Test: mem map translation ...[2024-07-21 11:48:05.751295] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:07.091 [2024-07-21 11:48:05.751377] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:07.091 [2024-07-21 11:48:05.751489] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:07.091 [2024-07-21 11:48:05.751542] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:07.091 passed 00:06:07.091 Test: mem map registration ...[2024-07-21 11:48:05.816240] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register 
parameters, vaddr=0x200000 len=1234 00:06:07.092 [2024-07-21 11:48:05.816370] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:07.092 passed 00:06:07.092 Test: mem map adjacent registrations ...passed 00:06:07.092 00:06:07.092 Run Summary: Type Total Ran Passed Failed Inactive 00:06:07.092 suites 1 1 n/a 0 0 00:06:07.092 tests 4 4 4 0 0 00:06:07.092 asserts 152 152 152 0 n/a 00:06:07.092 00:06:07.092 Elapsed time = 0.221 seconds 00:06:07.092 00:06:07.092 real 0m0.271s 00:06:07.092 user 0m0.234s 00:06:07.092 sys 0m0.027s 00:06:07.092 11:48:05 env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:07.092 11:48:05 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:07.092 ************************************ 00:06:07.092 END TEST env_memory 00:06:07.092 ************************************ 00:06:07.350 11:48:05 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:07.350 11:48:05 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:07.350 11:48:05 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:07.350 11:48:05 env -- common/autotest_common.sh@10 -- # set +x 00:06:07.350 ************************************ 00:06:07.350 START TEST env_vtophys 00:06:07.350 ************************************ 00:06:07.350 11:48:05 env.env_vtophys -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:07.350 EAL: lib.eal log level changed from notice to debug 00:06:07.350 EAL: Detected lcore 0 as core 0 on socket 0 00:06:07.350 EAL: Detected lcore 1 as core 0 on socket 0 00:06:07.350 EAL: Detected lcore 2 as core 0 on socket 0 00:06:07.350 EAL: Detected lcore 3 as core 0 on socket 0 00:06:07.350 EAL: Detected lcore 4 as core 0 on socket 0 00:06:07.350 EAL: Detected lcore 5 as core 0 on socket 0 00:06:07.350 EAL: Detected lcore 6 as core 0 on socket 0 00:06:07.350 EAL: Detected lcore 7 as core 0 on socket 0 00:06:07.350 EAL: Detected lcore 8 as core 0 on socket 0 00:06:07.350 EAL: Detected lcore 9 as core 0 on socket 0 00:06:07.350 EAL: Maximum logical cores by configuration: 128 00:06:07.350 EAL: Detected CPU lcores: 10 00:06:07.350 EAL: Detected NUMA nodes: 1 00:06:07.350 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:06:07.350 EAL: Detected shared linkage of DPDK 00:06:07.350 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:06:07.350 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:06:07.350 EAL: Registered [vdev] bus. 
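For readers tracking the START TEST / END TEST banners and per-test timing summaries in this log: a minimal sketch of a run_test-style wrapper in bash, shown only to illustrate the pattern (an assumed shape, not the actual helper in SPDK's test/common/autotest_common.sh, which also wires up xtrace and failure bookkeeping):
# Hypothetical wrapper reproducing the banner-and-timing pattern seen here.
run_test_sketch() {
    local name=$1; shift
    printf '%s\n' '************************************' \
        "START TEST $name" '************************************'
    local start=$SECONDS rc=0
    "$@" || rc=$?
    printf '%s\n' '************************************' \
        "END TEST $name" '************************************'
    echo "elapsed $((SECONDS - start))s (rc=$rc)"
    return $rc
}
# e.g.: run_test_sketch env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut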
00:06:07.350 EAL: bus.vdev log level changed from disabled to notice 00:06:07.350 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:06:07.350 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:06:07.350 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:07.350 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:07.350 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:06:07.350 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:06:07.350 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:06:07.350 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:06:07.350 EAL: No shared files mode enabled, IPC will be disabled 00:06:07.350 EAL: No shared files mode enabled, IPC is disabled 00:06:07.350 EAL: Selected IOVA mode 'PA' 00:06:07.350 EAL: Probing VFIO support... 00:06:07.350 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:07.350 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:07.350 EAL: Ask a virtual area of 0x2e000 bytes 00:06:07.350 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:07.350 EAL: Setting up physically contiguous memory... 00:06:07.350 EAL: Setting maximum number of open files to 524288 00:06:07.350 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:07.350 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:07.350 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.350 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:07.350 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:07.350 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.350 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:07.350 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:07.350 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.350 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:07.350 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:07.350 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.350 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:07.350 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:07.350 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.350 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:07.350 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:07.350 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.350 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:07.350 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:07.350 EAL: Ask a virtual area of 0x61000 bytes 00:06:07.350 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:07.350 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:07.350 EAL: Ask a virtual area of 0x400000000 bytes 00:06:07.350 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:07.350 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:07.350 EAL: Hugepages will be freed exactly as allocated. 
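A quick cross-check of the reservation sizes above: each memseg list is created with n_segs:8192 and hugepage_sz:2097152 (2 MiB), so one list covers 8192 * 2097152 = 17179869184 bytes, which is exactly the 0x400000000-byte virtual area EAL reserves four times here.
# Shell arithmetic confirming the per-list VA size quoted in the EAL output:
printf '0x%x\n' $((8192 * 2097152))   # -> 0x400000000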
00:06:07.350 EAL: No shared files mode enabled, IPC is disabled 00:06:07.350 EAL: No shared files mode enabled, IPC is disabled 00:06:07.350 EAL: TSC frequency is ~2290000 KHz 00:06:07.350 EAL: Main lcore 0 is ready (tid=7f76549c3a40;cpuset=[0]) 00:06:07.350 EAL: Trying to obtain current memory policy. 00:06:07.350 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.350 EAL: Restoring previous memory policy: 0 00:06:07.350 EAL: request: mp_malloc_sync 00:06:07.350 EAL: No shared files mode enabled, IPC is disabled 00:06:07.350 EAL: Heap on socket 0 was expanded by 2MB 00:06:07.350 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:07.350 EAL: No shared files mode enabled, IPC is disabled 00:06:07.350 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:07.350 EAL: Mem event callback 'spdk:(nil)' registered 00:06:07.350 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:07.350 00:06:07.350 00:06:07.350 CUnit - A unit testing framework for C - Version 2.1-3 00:06:07.350 http://cunit.sourceforge.net/ 00:06:07.350 00:06:07.350 00:06:07.350 Suite: components_suite 00:06:07.916 Test: vtophys_malloc_test ...passed 00:06:07.916 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:07.916 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.916 EAL: Restoring previous memory policy: 4 00:06:07.916 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.916 EAL: request: mp_malloc_sync 00:06:07.916 EAL: No shared files mode enabled, IPC is disabled 00:06:07.916 EAL: Heap on socket 0 was expanded by 4MB 00:06:07.916 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.916 EAL: request: mp_malloc_sync 00:06:07.916 EAL: No shared files mode enabled, IPC is disabled 00:06:07.916 EAL: Heap on socket 0 was shrunk by 4MB 00:06:07.916 EAL: Trying to obtain current memory policy. 00:06:07.916 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.916 EAL: Restoring previous memory policy: 4 00:06:07.916 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.916 EAL: request: mp_malloc_sync 00:06:07.916 EAL: No shared files mode enabled, IPC is disabled 00:06:07.916 EAL: Heap on socket 0 was expanded by 6MB 00:06:07.916 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.916 EAL: request: mp_malloc_sync 00:06:07.916 EAL: No shared files mode enabled, IPC is disabled 00:06:07.916 EAL: Heap on socket 0 was shrunk by 6MB 00:06:07.916 EAL: Trying to obtain current memory policy. 00:06:07.916 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.916 EAL: Restoring previous memory policy: 4 00:06:07.916 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.916 EAL: request: mp_malloc_sync 00:06:07.916 EAL: No shared files mode enabled, IPC is disabled 00:06:07.916 EAL: Heap on socket 0 was expanded by 10MB 00:06:07.916 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.916 EAL: request: mp_malloc_sync 00:06:07.916 EAL: No shared files mode enabled, IPC is disabled 00:06:07.916 EAL: Heap on socket 0 was shrunk by 10MB 00:06:07.916 EAL: Trying to obtain current memory policy. 
00:06:07.916 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.916 EAL: Restoring previous memory policy: 4 00:06:07.916 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.916 EAL: request: mp_malloc_sync 00:06:07.916 EAL: No shared files mode enabled, IPC is disabled 00:06:07.916 EAL: Heap on socket 0 was expanded by 18MB 00:06:07.916 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.916 EAL: request: mp_malloc_sync 00:06:07.916 EAL: No shared files mode enabled, IPC is disabled 00:06:07.916 EAL: Heap on socket 0 was shrunk by 18MB 00:06:07.916 EAL: Trying to obtain current memory policy. 00:06:07.916 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.916 EAL: Restoring previous memory policy: 4 00:06:07.916 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.916 EAL: request: mp_malloc_sync 00:06:07.916 EAL: No shared files mode enabled, IPC is disabled 00:06:07.916 EAL: Heap on socket 0 was expanded by 34MB 00:06:07.916 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.916 EAL: request: mp_malloc_sync 00:06:07.916 EAL: No shared files mode enabled, IPC is disabled 00:06:07.916 EAL: Heap on socket 0 was shrunk by 34MB 00:06:07.916 EAL: Trying to obtain current memory policy. 00:06:07.916 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.916 EAL: Restoring previous memory policy: 4 00:06:07.916 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.916 EAL: request: mp_malloc_sync 00:06:07.916 EAL: No shared files mode enabled, IPC is disabled 00:06:07.916 EAL: Heap on socket 0 was expanded by 66MB 00:06:07.916 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.916 EAL: request: mp_malloc_sync 00:06:07.916 EAL: No shared files mode enabled, IPC is disabled 00:06:07.916 EAL: Heap on socket 0 was shrunk by 66MB 00:06:07.916 EAL: Trying to obtain current memory policy. 00:06:07.916 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.916 EAL: Restoring previous memory policy: 4 00:06:07.916 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.916 EAL: request: mp_malloc_sync 00:06:07.916 EAL: No shared files mode enabled, IPC is disabled 00:06:07.916 EAL: Heap on socket 0 was expanded by 130MB 00:06:07.917 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.917 EAL: request: mp_malloc_sync 00:06:07.917 EAL: No shared files mode enabled, IPC is disabled 00:06:07.917 EAL: Heap on socket 0 was shrunk by 130MB 00:06:07.917 EAL: Trying to obtain current memory policy. 00:06:07.917 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:07.917 EAL: Restoring previous memory policy: 4 00:06:07.917 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.917 EAL: request: mp_malloc_sync 00:06:07.917 EAL: No shared files mode enabled, IPC is disabled 00:06:07.917 EAL: Heap on socket 0 was expanded by 258MB 00:06:07.917 EAL: Calling mem event callback 'spdk:(nil)' 00:06:07.917 EAL: request: mp_malloc_sync 00:06:07.917 EAL: No shared files mode enabled, IPC is disabled 00:06:07.917 EAL: Heap on socket 0 was shrunk by 258MB 00:06:07.917 EAL: Trying to obtain current memory policy. 
00:06:07.917 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.175 EAL: Restoring previous memory policy: 4 00:06:08.175 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.175 EAL: request: mp_malloc_sync 00:06:08.175 EAL: No shared files mode enabled, IPC is disabled 00:06:08.175 EAL: Heap on socket 0 was expanded by 514MB 00:06:08.175 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.175 EAL: request: mp_malloc_sync 00:06:08.175 EAL: No shared files mode enabled, IPC is disabled 00:06:08.175 EAL: Heap on socket 0 was shrunk by 514MB 00:06:08.175 EAL: Trying to obtain current memory policy. 00:06:08.175 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:08.433 EAL: Restoring previous memory policy: 4 00:06:08.433 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.433 EAL: request: mp_malloc_sync 00:06:08.433 EAL: No shared files mode enabled, IPC is disabled 00:06:08.433 EAL: Heap on socket 0 was expanded by 1026MB 00:06:08.698 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.971 passed 00:06:08.971 00:06:08.971 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.971 suites 1 1 n/a 0 0 00:06:08.971 tests 2 2 2 0 0 00:06:08.971 asserts 5330 5330 5330 0 n/a 00:06:08.971 00:06:08.972 Elapsed time = 1.355 seconds 00:06:08.972 EAL: request: mp_malloc_sync 00:06:08.972 EAL: No shared files mode enabled, IPC is disabled 00:06:08.972 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:08.972 EAL: Calling mem event callback 'spdk:(nil)' 00:06:08.972 EAL: request: mp_malloc_sync 00:06:08.972 EAL: No shared files mode enabled, IPC is disabled 00:06:08.972 EAL: Heap on socket 0 was shrunk by 2MB 00:06:08.972 EAL: No shared files mode enabled, IPC is disabled 00:06:08.972 EAL: No shared files mode enabled, IPC is disabled 00:06:08.972 EAL: No shared files mode enabled, IPC is disabled 00:06:08.972 00:06:08.972 real 0m1.612s 00:06:08.972 user 0m0.777s 00:06:08.972 sys 0m0.703s 00:06:08.972 11:48:07 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:08.972 11:48:07 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:08.972 ************************************ 00:06:08.972 END TEST env_vtophys 00:06:08.972 ************************************ 00:06:08.972 11:48:07 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:08.972 11:48:07 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:08.972 11:48:07 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:08.972 11:48:07 env -- common/autotest_common.sh@10 -- # set +x 00:06:08.972 ************************************ 00:06:08.972 START TEST env_pci 00:06:08.972 ************************************ 00:06:08.972 11:48:07 env.env_pci -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:08.972 00:06:08.972 00:06:08.972 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.972 http://cunit.sourceforge.net/ 00:06:08.972 00:06:08.972 00:06:08.972 Suite: pci 00:06:08.972 Test: pci_hook ...[2024-07-21 11:48:07.694766] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 74370 has claimed it 00:06:08.972 passed 00:06:08.972 00:06:08.972 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.972 suites 1 1 n/a 0 0 00:06:08.972 tests 1 1 1 0 0 00:06:08.972 asserts 25 25 25 0 n/a 00:06:08.972 00:06:08.972 Elapsed time = 0.005 secondsEAL: Cannot find device 
(10000:00:01.0) 00:06:08.972 EAL: Failed to attach device on primary process 00:06:08.972 00:06:08.972 00:06:08.972 real 0m0.091s 00:06:08.972 user 0m0.036s 00:06:08.972 sys 0m0.054s 00:06:08.972 11:48:07 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:08.972 11:48:07 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:08.972 ************************************ 00:06:08.972 END TEST env_pci 00:06:08.972 ************************************ 00:06:08.972 11:48:07 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:08.972 11:48:07 env -- env/env.sh@15 -- # uname 00:06:08.972 11:48:07 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:08.972 11:48:07 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:08.972 11:48:07 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:08.972 11:48:07 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:06:08.972 11:48:07 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:08.972 11:48:07 env -- common/autotest_common.sh@10 -- # set +x 00:06:08.972 ************************************ 00:06:08.972 START TEST env_dpdk_post_init 00:06:08.972 ************************************ 00:06:08.972 11:48:07 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:09.230 EAL: Detected CPU lcores: 10 00:06:09.230 EAL: Detected NUMA nodes: 1 00:06:09.230 EAL: Detected shared linkage of DPDK 00:06:09.230 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:09.230 EAL: Selected IOVA mode 'PA' 00:06:09.230 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:09.230 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:09.230 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:09.230 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:06:09.230 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:06:09.230 Starting DPDK initialization... 00:06:09.230 Starting SPDK post initialization... 00:06:09.230 SPDK NVMe probe 00:06:09.230 Attaching to 0000:00:10.0 00:06:09.230 Attaching to 0000:00:11.0 00:06:09.230 Attaching to 0000:00:12.0 00:06:09.230 Attaching to 0000:00:13.0 00:06:09.230 Attached to 0000:00:10.0 00:06:09.230 Attached to 0000:00:11.0 00:06:09.230 Attached to 0000:00:13.0 00:06:09.230 Attached to 0000:00:12.0 00:06:09.230 Cleaning up... 
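An aside on the probe lines above: vendor:device 1b36:0010 is QEMU's emulated NVMe controller, which is why four of them (00:10.0 through 00:13.0) attach on this VM. A hypothetical shell cross-check, assuming lspci from pciutils is installed:
# List PCI functions matching QEMU's NVMe vendor:device ID; on this VM it
# should report 0000:00:10.0, 0000:00:11.0, 0000:00:12.0 and 0000:00:13.0.
lspci -nn -d 1b36:0010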
00:06:09.230 00:06:09.230 real 0m0.262s 00:06:09.230 user 0m0.073s 00:06:09.230 sys 0m0.093s 00:06:09.230 11:48:08 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.230 11:48:08 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:09.230 ************************************ 00:06:09.230 END TEST env_dpdk_post_init 00:06:09.230 ************************************ 00:06:09.488 11:48:08 env -- env/env.sh@26 -- # uname 00:06:09.488 11:48:08 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:09.488 11:48:08 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:09.488 11:48:08 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:09.488 11:48:08 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.488 11:48:08 env -- common/autotest_common.sh@10 -- # set +x 00:06:09.488 ************************************ 00:06:09.488 START TEST env_mem_callbacks 00:06:09.488 ************************************ 00:06:09.488 11:48:08 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:09.488 EAL: Detected CPU lcores: 10 00:06:09.488 EAL: Detected NUMA nodes: 1 00:06:09.488 EAL: Detected shared linkage of DPDK 00:06:09.488 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:09.488 EAL: Selected IOVA mode 'PA' 00:06:09.488 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:09.488 00:06:09.488 00:06:09.488 CUnit - A unit testing framework for C - Version 2.1-3 00:06:09.488 http://cunit.sourceforge.net/ 00:06:09.488 00:06:09.488 00:06:09.488 Suite: memory 00:06:09.488 Test: test ... 00:06:09.488 register 0x200000200000 2097152 00:06:09.488 malloc 3145728 00:06:09.488 register 0x200000400000 4194304 00:06:09.488 buf 0x200000500000 len 3145728 PASSED 00:06:09.488 malloc 64 00:06:09.488 buf 0x2000004fff40 len 64 PASSED 00:06:09.488 malloc 4194304 00:06:09.488 register 0x200000800000 6291456 00:06:09.488 buf 0x200000a00000 len 4194304 PASSED 00:06:09.488 free 0x200000500000 3145728 00:06:09.488 free 0x2000004fff40 64 00:06:09.488 unregister 0x200000400000 4194304 PASSED 00:06:09.488 free 0x200000a00000 4194304 00:06:09.488 unregister 0x200000800000 6291456 PASSED 00:06:09.489 malloc 8388608 00:06:09.489 register 0x200000400000 10485760 00:06:09.489 buf 0x200000600000 len 8388608 PASSED 00:06:09.489 free 0x200000600000 8388608 00:06:09.489 unregister 0x200000400000 10485760 PASSED 00:06:09.489 passed 00:06:09.489 00:06:09.489 Run Summary: Type Total Ran Passed Failed Inactive 00:06:09.489 suites 1 1 n/a 0 0 00:06:09.489 tests 1 1 1 0 0 00:06:09.489 asserts 15 15 15 0 n/a 00:06:09.489 00:06:09.489 Elapsed time = 0.011 seconds 00:06:09.747 00:06:09.747 real 0m0.202s 00:06:09.747 user 0m0.038s 00:06:09.747 sys 0m0.062s 00:06:09.747 11:48:08 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.747 11:48:08 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:09.747 ************************************ 00:06:09.747 END TEST env_mem_callbacks 00:06:09.747 ************************************ 00:06:09.747 00:06:09.747 real 0m2.892s 00:06:09.747 user 0m1.301s 00:06:09.747 sys 0m1.261s 00:06:09.747 11:48:08 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:09.747 11:48:08 env -- common/autotest_common.sh@10 -- # set +x 00:06:09.747 ************************************ 00:06:09.747 END TEST env 00:06:09.747 
************************************ 00:06:09.747 11:48:08 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:09.747 11:48:08 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:09.747 11:48:08 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:09.747 11:48:08 -- common/autotest_common.sh@10 -- # set +x 00:06:09.747 ************************************ 00:06:09.747 START TEST rpc 00:06:09.747 ************************************ 00:06:09.747 11:48:08 rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:09.747 * Looking for test storage... 00:06:09.747 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:09.747 11:48:08 rpc -- rpc/rpc.sh@65 -- # spdk_pid=74489 00:06:09.747 11:48:08 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:09.747 11:48:08 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.747 11:48:08 rpc -- rpc/rpc.sh@67 -- # waitforlisten 74489 00:06:09.747 11:48:08 rpc -- common/autotest_common.sh@827 -- # '[' -z 74489 ']' 00:06:09.747 11:48:08 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.747 11:48:08 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:09.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.747 11:48:08 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.747 11:48:08 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:09.747 11:48:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.005 [2024-07-21 11:48:08.697666] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:10.005 [2024-07-21 11:48:08.697811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74489 ] 00:06:10.005 [2024-07-21 11:48:08.848979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.263 [2024-07-21 11:48:08.895584] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:10.263 [2024-07-21 11:48:08.895651] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 74489' to capture a snapshot of events at runtime. 00:06:10.263 [2024-07-21 11:48:08.895661] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:10.263 [2024-07-21 11:48:08.895671] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:10.263 [2024-07-21 11:48:08.895683] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid74489 for offline analysis/debug. 
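For context on the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above, a minimal sketch of the polling idiom in bash (an assumed shape for illustration, not the real waitforlisten helper from autotest_common.sh):
# Start the target, then retry a cheap RPC until the socket answers,
# bailing out if the process dies first.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
spdk_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$spdk_pid" 2>/dev/null || { echo 'spdk_tgt exited early' >&2; exit 1; }
    sleep 0.1
done
echo "spdk_tgt ($spdk_pid) is listening on /var/tmp/spdk.sock"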
00:06:10.263 [2024-07-21 11:48:08.895720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.831 11:48:09 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:10.831 11:48:09 rpc -- common/autotest_common.sh@860 -- # return 0 00:06:10.831 11:48:09 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:10.831 11:48:09 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:10.831 11:48:09 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:10.831 11:48:09 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:10.831 11:48:09 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:10.831 11:48:09 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:10.831 11:48:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.831 ************************************ 00:06:10.831 START TEST rpc_integrity 00:06:10.831 ************************************ 00:06:10.831 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:06:10.831 11:48:09 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:10.831 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.831 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.831 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.831 11:48:09 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:10.831 11:48:09 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:10.831 11:48:09 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:10.831 11:48:09 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:10.831 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.831 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.831 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.831 11:48:09 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:10.831 11:48:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:10.831 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.831 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.831 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.831 11:48:09 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:10.831 { 00:06:10.831 "name": "Malloc0", 00:06:10.831 "aliases": [ 00:06:10.831 "d2ec0986-d2b0-4807-97e3-6c9eca3a12df" 00:06:10.831 ], 00:06:10.831 "product_name": "Malloc disk", 00:06:10.831 "block_size": 512, 00:06:10.831 "num_blocks": 16384, 00:06:10.831 "uuid": "d2ec0986-d2b0-4807-97e3-6c9eca3a12df", 00:06:10.831 "assigned_rate_limits": { 00:06:10.831 "rw_ios_per_sec": 0, 00:06:10.831 "rw_mbytes_per_sec": 0, 00:06:10.831 "r_mbytes_per_sec": 0, 00:06:10.831 "w_mbytes_per_sec": 0 00:06:10.831 }, 00:06:10.831 "claimed": false, 00:06:10.831 "zoned": false, 00:06:10.831 "supported_io_types": { 00:06:10.831 "read": true, 00:06:10.831 "write": true, 00:06:10.831 "unmap": true, 00:06:10.831 "write_zeroes": 
true, 00:06:10.831 "flush": true, 00:06:10.831 "reset": true, 00:06:10.831 "compare": false, 00:06:10.831 "compare_and_write": false, 00:06:10.831 "abort": true, 00:06:10.831 "nvme_admin": false, 00:06:10.831 "nvme_io": false 00:06:10.831 }, 00:06:10.831 "memory_domains": [ 00:06:10.831 { 00:06:10.831 "dma_device_id": "system", 00:06:10.831 "dma_device_type": 1 00:06:10.831 }, 00:06:10.831 { 00:06:10.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:10.831 "dma_device_type": 2 00:06:10.831 } 00:06:10.831 ], 00:06:10.831 "driver_specific": {} 00:06:10.831 } 00:06:10.831 ]' 00:06:10.831 11:48:09 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:10.831 11:48:09 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:10.831 11:48:09 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:10.831 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.831 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.831 [2024-07-21 11:48:09.616410] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:10.831 [2024-07-21 11:48:09.616487] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:10.831 [2024-07-21 11:48:09.616518] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:06:10.831 [2024-07-21 11:48:09.616537] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:10.831 [2024-07-21 11:48:09.618904] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:10.831 [2024-07-21 11:48:09.618945] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:10.831 Passthru0 00:06:10.831 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.831 11:48:09 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:10.831 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.831 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.831 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.831 11:48:09 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:10.831 { 00:06:10.831 "name": "Malloc0", 00:06:10.831 "aliases": [ 00:06:10.831 "d2ec0986-d2b0-4807-97e3-6c9eca3a12df" 00:06:10.831 ], 00:06:10.831 "product_name": "Malloc disk", 00:06:10.831 "block_size": 512, 00:06:10.831 "num_blocks": 16384, 00:06:10.831 "uuid": "d2ec0986-d2b0-4807-97e3-6c9eca3a12df", 00:06:10.831 "assigned_rate_limits": { 00:06:10.831 "rw_ios_per_sec": 0, 00:06:10.831 "rw_mbytes_per_sec": 0, 00:06:10.831 "r_mbytes_per_sec": 0, 00:06:10.831 "w_mbytes_per_sec": 0 00:06:10.831 }, 00:06:10.831 "claimed": true, 00:06:10.831 "claim_type": "exclusive_write", 00:06:10.831 "zoned": false, 00:06:10.831 "supported_io_types": { 00:06:10.831 "read": true, 00:06:10.831 "write": true, 00:06:10.831 "unmap": true, 00:06:10.831 "write_zeroes": true, 00:06:10.831 "flush": true, 00:06:10.831 "reset": true, 00:06:10.831 "compare": false, 00:06:10.831 "compare_and_write": false, 00:06:10.831 "abort": true, 00:06:10.831 "nvme_admin": false, 00:06:10.831 "nvme_io": false 00:06:10.831 }, 00:06:10.832 "memory_domains": [ 00:06:10.832 { 00:06:10.832 "dma_device_id": "system", 00:06:10.832 "dma_device_type": 1 00:06:10.832 }, 00:06:10.832 { 00:06:10.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:10.832 "dma_device_type": 2 00:06:10.832 } 
00:06:10.832 ], 00:06:10.832 "driver_specific": {} 00:06:10.832 }, 00:06:10.832 { 00:06:10.832 "name": "Passthru0", 00:06:10.832 "aliases": [ 00:06:10.832 "a53b55e4-30ed-5d10-b006-7d2c4a3a467d" 00:06:10.832 ], 00:06:10.832 "product_name": "passthru", 00:06:10.832 "block_size": 512, 00:06:10.832 "num_blocks": 16384, 00:06:10.832 "uuid": "a53b55e4-30ed-5d10-b006-7d2c4a3a467d", 00:06:10.832 "assigned_rate_limits": { 00:06:10.832 "rw_ios_per_sec": 0, 00:06:10.832 "rw_mbytes_per_sec": 0, 00:06:10.832 "r_mbytes_per_sec": 0, 00:06:10.832 "w_mbytes_per_sec": 0 00:06:10.832 }, 00:06:10.832 "claimed": false, 00:06:10.832 "zoned": false, 00:06:10.832 "supported_io_types": { 00:06:10.832 "read": true, 00:06:10.832 "write": true, 00:06:10.832 "unmap": true, 00:06:10.832 "write_zeroes": true, 00:06:10.832 "flush": true, 00:06:10.832 "reset": true, 00:06:10.832 "compare": false, 00:06:10.832 "compare_and_write": false, 00:06:10.832 "abort": true, 00:06:10.832 "nvme_admin": false, 00:06:10.832 "nvme_io": false 00:06:10.832 }, 00:06:10.832 "memory_domains": [ 00:06:10.832 { 00:06:10.832 "dma_device_id": "system", 00:06:10.832 "dma_device_type": 1 00:06:10.832 }, 00:06:10.832 { 00:06:10.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:10.832 "dma_device_type": 2 00:06:10.832 } 00:06:10.832 ], 00:06:10.832 "driver_specific": { 00:06:10.832 "passthru": { 00:06:10.832 "name": "Passthru0", 00:06:10.832 "base_bdev_name": "Malloc0" 00:06:10.832 } 00:06:10.832 } 00:06:10.832 } 00:06:10.832 ]' 00:06:10.832 11:48:09 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:11.091 11:48:09 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:11.091 11:48:09 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:11.091 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.091 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.091 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.091 11:48:09 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:11.091 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.091 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.091 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.091 11:48:09 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:11.091 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.091 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.091 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.091 11:48:09 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:11.091 11:48:09 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:11.091 ************************************ 00:06:11.091 END TEST rpc_integrity 00:06:11.091 ************************************ 00:06:11.091 11:48:09 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:11.091 00:06:11.091 real 0m0.294s 00:06:11.091 user 0m0.181s 00:06:11.091 sys 0m0.043s 00:06:11.091 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.091 11:48:09 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.091 11:48:09 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:11.091 11:48:09 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:11.091 11:48:09 rpc -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.091 11:48:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.091 ************************************ 00:06:11.091 START TEST rpc_plugins 00:06:11.091 ************************************ 00:06:11.091 11:48:09 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:06:11.091 11:48:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:11.091 11:48:09 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.091 11:48:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:11.091 11:48:09 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.091 11:48:09 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:11.091 11:48:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:11.091 11:48:09 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.091 11:48:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:11.091 11:48:09 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.091 11:48:09 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:11.091 { 00:06:11.091 "name": "Malloc1", 00:06:11.091 "aliases": [ 00:06:11.091 "7dd605b4-9e2b-4bf2-9d9d-234dcabe3bf0" 00:06:11.091 ], 00:06:11.091 "product_name": "Malloc disk", 00:06:11.091 "block_size": 4096, 00:06:11.091 "num_blocks": 256, 00:06:11.091 "uuid": "7dd605b4-9e2b-4bf2-9d9d-234dcabe3bf0", 00:06:11.091 "assigned_rate_limits": { 00:06:11.091 "rw_ios_per_sec": 0, 00:06:11.091 "rw_mbytes_per_sec": 0, 00:06:11.091 "r_mbytes_per_sec": 0, 00:06:11.091 "w_mbytes_per_sec": 0 00:06:11.091 }, 00:06:11.091 "claimed": false, 00:06:11.091 "zoned": false, 00:06:11.091 "supported_io_types": { 00:06:11.091 "read": true, 00:06:11.091 "write": true, 00:06:11.091 "unmap": true, 00:06:11.091 "write_zeroes": true, 00:06:11.091 "flush": true, 00:06:11.091 "reset": true, 00:06:11.091 "compare": false, 00:06:11.091 "compare_and_write": false, 00:06:11.091 "abort": true, 00:06:11.091 "nvme_admin": false, 00:06:11.091 "nvme_io": false 00:06:11.091 }, 00:06:11.091 "memory_domains": [ 00:06:11.091 { 00:06:11.091 "dma_device_id": "system", 00:06:11.091 "dma_device_type": 1 00:06:11.091 }, 00:06:11.091 { 00:06:11.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:11.091 "dma_device_type": 2 00:06:11.091 } 00:06:11.091 ], 00:06:11.091 "driver_specific": {} 00:06:11.091 } 00:06:11.091 ]' 00:06:11.091 11:48:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:11.091 11:48:09 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:11.091 11:48:09 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:11.091 11:48:09 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.091 11:48:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:11.091 11:48:09 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.091 11:48:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:11.091 11:48:09 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.091 11:48:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:11.091 11:48:09 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.091 11:48:09 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:11.091 11:48:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:11.350 ************************************ 
00:06:11.350 END TEST rpc_plugins 00:06:11.350 ************************************ 00:06:11.350 11:48:09 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:11.350 00:06:11.350 real 0m0.142s 00:06:11.350 user 0m0.085s 00:06:11.350 sys 0m0.020s 00:06:11.350 11:48:09 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.350 11:48:09 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:11.350 11:48:10 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:11.350 11:48:10 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:11.350 11:48:10 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.350 11:48:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.350 ************************************ 00:06:11.350 START TEST rpc_trace_cmd_test 00:06:11.350 ************************************ 00:06:11.350 11:48:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:06:11.350 11:48:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:11.350 11:48:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:11.350 11:48:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.350 11:48:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:11.350 11:48:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.350 11:48:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:11.350 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid74489", 00:06:11.350 "tpoint_group_mask": "0x8", 00:06:11.350 "iscsi_conn": { 00:06:11.350 "mask": "0x2", 00:06:11.350 "tpoint_mask": "0x0" 00:06:11.350 }, 00:06:11.350 "scsi": { 00:06:11.350 "mask": "0x4", 00:06:11.350 "tpoint_mask": "0x0" 00:06:11.350 }, 00:06:11.350 "bdev": { 00:06:11.350 "mask": "0x8", 00:06:11.350 "tpoint_mask": "0xffffffffffffffff" 00:06:11.350 }, 00:06:11.350 "nvmf_rdma": { 00:06:11.350 "mask": "0x10", 00:06:11.350 "tpoint_mask": "0x0" 00:06:11.350 }, 00:06:11.350 "nvmf_tcp": { 00:06:11.350 "mask": "0x20", 00:06:11.350 "tpoint_mask": "0x0" 00:06:11.350 }, 00:06:11.350 "ftl": { 00:06:11.350 "mask": "0x40", 00:06:11.350 "tpoint_mask": "0x0" 00:06:11.350 }, 00:06:11.350 "blobfs": { 00:06:11.350 "mask": "0x80", 00:06:11.351 "tpoint_mask": "0x0" 00:06:11.351 }, 00:06:11.351 "dsa": { 00:06:11.351 "mask": "0x200", 00:06:11.351 "tpoint_mask": "0x0" 00:06:11.351 }, 00:06:11.351 "thread": { 00:06:11.351 "mask": "0x400", 00:06:11.351 "tpoint_mask": "0x0" 00:06:11.351 }, 00:06:11.351 "nvme_pcie": { 00:06:11.351 "mask": "0x800", 00:06:11.351 "tpoint_mask": "0x0" 00:06:11.351 }, 00:06:11.351 "iaa": { 00:06:11.351 "mask": "0x1000", 00:06:11.351 "tpoint_mask": "0x0" 00:06:11.351 }, 00:06:11.351 "nvme_tcp": { 00:06:11.351 "mask": "0x2000", 00:06:11.351 "tpoint_mask": "0x0" 00:06:11.351 }, 00:06:11.351 "bdev_nvme": { 00:06:11.351 "mask": "0x4000", 00:06:11.351 "tpoint_mask": "0x0" 00:06:11.351 }, 00:06:11.351 "sock": { 00:06:11.351 "mask": "0x8000", 00:06:11.351 "tpoint_mask": "0x0" 00:06:11.351 } 00:06:11.351 }' 00:06:11.351 11:48:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:11.351 11:48:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:11.351 11:48:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:11.351 11:48:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:11.351 11:48:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 
'has("tpoint_shm_path")' 00:06:11.351 11:48:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:11.351 11:48:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:11.610 11:48:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:11.610 11:48:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:11.610 11:48:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:11.610 00:06:11.610 real 0m0.252s 00:06:11.610 user 0m0.207s 00:06:11.610 sys 0m0.032s 00:06:11.610 11:48:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.610 11:48:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:11.610 ************************************ 00:06:11.610 END TEST rpc_trace_cmd_test 00:06:11.610 ************************************ 00:06:11.610 11:48:10 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:11.610 11:48:10 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:11.610 11:48:10 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:11.610 11:48:10 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:11.610 11:48:10 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:11.610 11:48:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.610 ************************************ 00:06:11.610 START TEST rpc_daemon_integrity 00:06:11.610 ************************************ 00:06:11.610 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:06:11.610 11:48:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:11.610 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.610 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.610 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.610 11:48:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:11.610 11:48:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:11.610 11:48:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:11.610 11:48:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:11.611 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.611 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.611 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.611 11:48:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:11.611 11:48:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:11.611 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.611 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.611 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.611 11:48:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:11.611 { 00:06:11.611 "name": "Malloc2", 00:06:11.611 "aliases": [ 00:06:11.611 "6ef35297-7735-4f65-9621-3f7e4117e2e7" 00:06:11.611 ], 00:06:11.611 "product_name": "Malloc disk", 00:06:11.611 "block_size": 512, 00:06:11.611 "num_blocks": 16384, 00:06:11.611 "uuid": "6ef35297-7735-4f65-9621-3f7e4117e2e7", 00:06:11.611 "assigned_rate_limits": { 00:06:11.611 "rw_ios_per_sec": 0, 00:06:11.611 
"rw_mbytes_per_sec": 0, 00:06:11.611 "r_mbytes_per_sec": 0, 00:06:11.611 "w_mbytes_per_sec": 0 00:06:11.611 }, 00:06:11.611 "claimed": false, 00:06:11.611 "zoned": false, 00:06:11.611 "supported_io_types": { 00:06:11.611 "read": true, 00:06:11.611 "write": true, 00:06:11.611 "unmap": true, 00:06:11.611 "write_zeroes": true, 00:06:11.611 "flush": true, 00:06:11.611 "reset": true, 00:06:11.611 "compare": false, 00:06:11.611 "compare_and_write": false, 00:06:11.611 "abort": true, 00:06:11.611 "nvme_admin": false, 00:06:11.611 "nvme_io": false 00:06:11.611 }, 00:06:11.611 "memory_domains": [ 00:06:11.611 { 00:06:11.611 "dma_device_id": "system", 00:06:11.611 "dma_device_type": 1 00:06:11.611 }, 00:06:11.611 { 00:06:11.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:11.611 "dma_device_type": 2 00:06:11.611 } 00:06:11.611 ], 00:06:11.611 "driver_specific": {} 00:06:11.611 } 00:06:11.611 ]' 00:06:11.611 11:48:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:11.611 11:48:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:11.611 11:48:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:11.611 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.611 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.611 [2024-07-21 11:48:10.471370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:11.611 [2024-07-21 11:48:10.471437] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:11.611 [2024-07-21 11:48:10.471462] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:06:11.611 [2024-07-21 11:48:10.471475] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:11.611 [2024-07-21 11:48:10.473799] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:11.611 [2024-07-21 11:48:10.473867] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:11.870 Passthru0 00:06:11.870 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.870 11:48:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:11.870 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.870 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.870 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.870 11:48:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:11.870 { 00:06:11.870 "name": "Malloc2", 00:06:11.870 "aliases": [ 00:06:11.870 "6ef35297-7735-4f65-9621-3f7e4117e2e7" 00:06:11.870 ], 00:06:11.870 "product_name": "Malloc disk", 00:06:11.870 "block_size": 512, 00:06:11.870 "num_blocks": 16384, 00:06:11.870 "uuid": "6ef35297-7735-4f65-9621-3f7e4117e2e7", 00:06:11.870 "assigned_rate_limits": { 00:06:11.870 "rw_ios_per_sec": 0, 00:06:11.870 "rw_mbytes_per_sec": 0, 00:06:11.870 "r_mbytes_per_sec": 0, 00:06:11.870 "w_mbytes_per_sec": 0 00:06:11.870 }, 00:06:11.870 "claimed": true, 00:06:11.870 "claim_type": "exclusive_write", 00:06:11.870 "zoned": false, 00:06:11.870 "supported_io_types": { 00:06:11.870 "read": true, 00:06:11.870 "write": true, 00:06:11.870 "unmap": true, 00:06:11.870 "write_zeroes": true, 00:06:11.870 "flush": true, 00:06:11.870 "reset": true, 00:06:11.870 "compare": false, 
00:06:11.870 "compare_and_write": false, 00:06:11.870 "abort": true, 00:06:11.870 "nvme_admin": false, 00:06:11.870 "nvme_io": false 00:06:11.870 }, 00:06:11.870 "memory_domains": [ 00:06:11.870 { 00:06:11.870 "dma_device_id": "system", 00:06:11.870 "dma_device_type": 1 00:06:11.870 }, 00:06:11.870 { 00:06:11.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:11.870 "dma_device_type": 2 00:06:11.870 } 00:06:11.870 ], 00:06:11.870 "driver_specific": {} 00:06:11.870 }, 00:06:11.870 { 00:06:11.870 "name": "Passthru0", 00:06:11.870 "aliases": [ 00:06:11.870 "c9436dd6-6ecf-5c2e-955f-54a3daa81f6b" 00:06:11.870 ], 00:06:11.870 "product_name": "passthru", 00:06:11.870 "block_size": 512, 00:06:11.870 "num_blocks": 16384, 00:06:11.870 "uuid": "c9436dd6-6ecf-5c2e-955f-54a3daa81f6b", 00:06:11.870 "assigned_rate_limits": { 00:06:11.870 "rw_ios_per_sec": 0, 00:06:11.870 "rw_mbytes_per_sec": 0, 00:06:11.870 "r_mbytes_per_sec": 0, 00:06:11.870 "w_mbytes_per_sec": 0 00:06:11.870 }, 00:06:11.870 "claimed": false, 00:06:11.870 "zoned": false, 00:06:11.870 "supported_io_types": { 00:06:11.870 "read": true, 00:06:11.870 "write": true, 00:06:11.870 "unmap": true, 00:06:11.870 "write_zeroes": true, 00:06:11.870 "flush": true, 00:06:11.870 "reset": true, 00:06:11.870 "compare": false, 00:06:11.870 "compare_and_write": false, 00:06:11.870 "abort": true, 00:06:11.870 "nvme_admin": false, 00:06:11.870 "nvme_io": false 00:06:11.870 }, 00:06:11.870 "memory_domains": [ 00:06:11.870 { 00:06:11.870 "dma_device_id": "system", 00:06:11.870 "dma_device_type": 1 00:06:11.870 }, 00:06:11.870 { 00:06:11.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:11.870 "dma_device_type": 2 00:06:11.870 } 00:06:11.870 ], 00:06:11.870 "driver_specific": { 00:06:11.870 "passthru": { 00:06:11.870 "name": "Passthru0", 00:06:11.870 "base_bdev_name": "Malloc2" 00:06:11.870 } 00:06:11.870 } 00:06:11.870 } 00:06:11.870 ]' 00:06:11.870 11:48:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:11.870 11:48:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:11.870 11:48:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:11.870 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.870 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.870 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.870 11:48:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:11.870 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.870 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.870 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.870 11:48:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:11.870 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.870 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.870 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.870 11:48:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:11.870 11:48:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:11.870 ************************************ 00:06:11.870 END TEST rpc_daemon_integrity 00:06:11.870 ************************************ 00:06:11.870 
11:48:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:11.870 00:06:11.870 real 0m0.277s 00:06:11.870 user 0m0.166s 00:06:11.870 sys 0m0.043s 00:06:11.870 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:11.870 11:48:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:11.870 11:48:10 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:11.870 11:48:10 rpc -- rpc/rpc.sh@84 -- # killprocess 74489 00:06:11.870 11:48:10 rpc -- common/autotest_common.sh@946 -- # '[' -z 74489 ']' 00:06:11.870 11:48:10 rpc -- common/autotest_common.sh@950 -- # kill -0 74489 00:06:11.870 11:48:10 rpc -- common/autotest_common.sh@951 -- # uname 00:06:11.870 11:48:10 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:11.870 11:48:10 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74489 00:06:11.870 killing process with pid 74489 00:06:11.870 11:48:10 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:11.870 11:48:10 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:11.870 11:48:10 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74489' 00:06:11.870 11:48:10 rpc -- common/autotest_common.sh@965 -- # kill 74489 00:06:11.870 11:48:10 rpc -- common/autotest_common.sh@970 -- # wait 74489 00:06:12.436 00:06:12.436 real 0m2.582s 00:06:12.436 user 0m3.124s 00:06:12.436 sys 0m0.726s 00:06:12.436 11:48:11 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:12.436 ************************************ 00:06:12.436 END TEST rpc 00:06:12.436 ************************************ 00:06:12.436 11:48:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.436 11:48:11 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:12.436 11:48:11 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:12.436 11:48:11 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.436 11:48:11 -- common/autotest_common.sh@10 -- # set +x 00:06:12.436 ************************************ 00:06:12.436 START TEST skip_rpc 00:06:12.436 ************************************ 00:06:12.436 11:48:11 skip_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:12.436 * Looking for test storage... 
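The rpc suite that just finished above drives a live spdk_tgt over its JSON-RPC socket and validates bdev_get_bdevs output with jq at every step. A minimal standalone sketch of the same sequence, assuming spdk_tgt is already listening on the default /var/tmp/spdk.sock and that scripts/rpc.py from this repo is invoked directly (both assumptions; the harness wraps these calls in rpc_cmd):

$ scripts/rpc.py bdev_malloc_create 8 512          # 8 MiB malloc bdev, 512-byte blocks -> Malloc0
$ scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
$ scripts/rpc.py bdev_get_bdevs | jq length        # the test asserts 2 here
$ scripts/rpc.py bdev_passthru_delete Passthru0
$ scripts/rpc.py bdev_malloc_delete Malloc0
$ scripts/rpc.py bdev_get_bdevs | jq length        # and 0 after teardown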
00:06:12.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:12.436 11:48:11 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:12.436 11:48:11 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:12.436 11:48:11 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:12.436 11:48:11 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:12.436 11:48:11 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:12.436 11:48:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.436 ************************************ 00:06:12.436 START TEST skip_rpc 00:06:12.436 ************************************ 00:06:12.436 11:48:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:06:12.436 11:48:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=74677 00:06:12.436 11:48:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:12.437 11:48:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:12.437 11:48:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:12.695 [2024-07-21 11:48:11.314578] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:12.695 [2024-07-21 11:48:11.314698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74677 ] 00:06:12.695 [2024-07-21 11:48:11.474011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.695 [2024-07-21 11:48:11.523971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 74677 00:06:17.973 11:48:16 
skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 74677 ']' 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 74677 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74677 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74677' 00:06:17.973 killing process with pid 74677 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 74677 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 74677 00:06:17.973 00:06:17.973 real 0m5.436s 00:06:17.973 user 0m5.039s 00:06:17.973 sys 0m0.312s 00:06:17.973 ************************************ 00:06:17.973 END TEST skip_rpc 00:06:17.973 ************************************ 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:17.973 11:48:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.973 11:48:16 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:17.973 11:48:16 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:17.973 11:48:16 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:17.973 11:48:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.973 ************************************ 00:06:17.973 START TEST skip_rpc_with_json 00:06:17.973 ************************************ 00:06:17.973 11:48:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:06:17.973 11:48:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:17.973 11:48:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=74769 00:06:17.973 11:48:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.973 11:48:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:17.973 11:48:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 74769 00:06:17.973 11:48:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 74769 ']' 00:06:17.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.973 11:48:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.973 11:48:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:17.973 11:48:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.973 11:48:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:17.973 11:48:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:17.973 [2024-07-21 11:48:16.811595] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:17.973 [2024-07-21 11:48:16.811773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74769 ] 00:06:18.231 [2024-07-21 11:48:16.971283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.231 [2024-07-21 11:48:17.059141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.798 11:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:18.798 11:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:06:18.798 11:48:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:18.798 11:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.798 11:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:18.798 [2024-07-21 11:48:17.618931] nvmf_rpc.c:2558:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:18.798 request: 00:06:18.798 { 00:06:18.798 "trtype": "tcp", 00:06:18.798 "method": "nvmf_get_transports", 00:06:18.798 "req_id": 1 00:06:18.798 } 00:06:18.798 Got JSON-RPC error response 00:06:18.798 response: 00:06:18.798 { 00:06:18.798 "code": -19, 00:06:18.798 "message": "No such device" 00:06:18.798 } 00:06:18.798 11:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:18.798 11:48:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:18.798 11:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.798 11:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:18.798 [2024-07-21 11:48:17.631022] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:18.798 11:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:18.798 11:48:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:18.798 11:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:18.798 11:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:19.057 11:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:19.057 11:48:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:19.057 { 00:06:19.057 "subsystems": [ 00:06:19.057 { 00:06:19.057 "subsystem": "keyring", 00:06:19.057 "config": [] 00:06:19.057 }, 00:06:19.057 { 00:06:19.057 "subsystem": "iobuf", 00:06:19.057 "config": [ 00:06:19.057 { 00:06:19.057 "method": "iobuf_set_options", 00:06:19.057 "params": { 00:06:19.057 "small_pool_count": 8192, 00:06:19.057 "large_pool_count": 1024, 00:06:19.057 "small_bufsize": 8192, 00:06:19.057 "large_bufsize": 135168 00:06:19.057 } 00:06:19.057 } 00:06:19.057 ] 00:06:19.057 }, 00:06:19.057 { 00:06:19.057 "subsystem": "sock", 00:06:19.057 "config": [ 00:06:19.057 { 00:06:19.057 "method": "sock_set_default_impl", 00:06:19.057 "params": { 00:06:19.057 "impl_name": "posix" 00:06:19.057 } 00:06:19.057 }, 00:06:19.057 { 00:06:19.057 "method": "sock_impl_set_options", 00:06:19.057 "params": { 00:06:19.057 "impl_name": "ssl", 00:06:19.057 "recv_buf_size": 4096, 00:06:19.057 "send_buf_size": 4096, 00:06:19.057 
"enable_recv_pipe": true, 00:06:19.057 "enable_quickack": false, 00:06:19.057 "enable_placement_id": 0, 00:06:19.057 "enable_zerocopy_send_server": true, 00:06:19.057 "enable_zerocopy_send_client": false, 00:06:19.057 "zerocopy_threshold": 0, 00:06:19.057 "tls_version": 0, 00:06:19.057 "enable_ktls": false 00:06:19.057 } 00:06:19.057 }, 00:06:19.057 { 00:06:19.057 "method": "sock_impl_set_options", 00:06:19.057 "params": { 00:06:19.057 "impl_name": "posix", 00:06:19.057 "recv_buf_size": 2097152, 00:06:19.057 "send_buf_size": 2097152, 00:06:19.057 "enable_recv_pipe": true, 00:06:19.057 "enable_quickack": false, 00:06:19.057 "enable_placement_id": 0, 00:06:19.057 "enable_zerocopy_send_server": true, 00:06:19.057 "enable_zerocopy_send_client": false, 00:06:19.057 "zerocopy_threshold": 0, 00:06:19.057 "tls_version": 0, 00:06:19.057 "enable_ktls": false 00:06:19.057 } 00:06:19.057 } 00:06:19.057 ] 00:06:19.057 }, 00:06:19.057 { 00:06:19.057 "subsystem": "vmd", 00:06:19.057 "config": [] 00:06:19.057 }, 00:06:19.057 { 00:06:19.057 "subsystem": "accel", 00:06:19.057 "config": [ 00:06:19.057 { 00:06:19.057 "method": "accel_set_options", 00:06:19.057 "params": { 00:06:19.057 "small_cache_size": 128, 00:06:19.057 "large_cache_size": 16, 00:06:19.057 "task_count": 2048, 00:06:19.057 "sequence_count": 2048, 00:06:19.057 "buf_count": 2048 00:06:19.057 } 00:06:19.057 } 00:06:19.057 ] 00:06:19.057 }, 00:06:19.057 { 00:06:19.057 "subsystem": "bdev", 00:06:19.057 "config": [ 00:06:19.057 { 00:06:19.057 "method": "bdev_set_options", 00:06:19.057 "params": { 00:06:19.057 "bdev_io_pool_size": 65535, 00:06:19.057 "bdev_io_cache_size": 256, 00:06:19.057 "bdev_auto_examine": true, 00:06:19.057 "iobuf_small_cache_size": 128, 00:06:19.057 "iobuf_large_cache_size": 16 00:06:19.057 } 00:06:19.057 }, 00:06:19.057 { 00:06:19.057 "method": "bdev_raid_set_options", 00:06:19.057 "params": { 00:06:19.057 "process_window_size_kb": 1024 00:06:19.057 } 00:06:19.057 }, 00:06:19.057 { 00:06:19.057 "method": "bdev_iscsi_set_options", 00:06:19.057 "params": { 00:06:19.057 "timeout_sec": 30 00:06:19.057 } 00:06:19.057 }, 00:06:19.057 { 00:06:19.057 "method": "bdev_nvme_set_options", 00:06:19.057 "params": { 00:06:19.057 "action_on_timeout": "none", 00:06:19.057 "timeout_us": 0, 00:06:19.057 "timeout_admin_us": 0, 00:06:19.057 "keep_alive_timeout_ms": 10000, 00:06:19.057 "arbitration_burst": 0, 00:06:19.057 "low_priority_weight": 0, 00:06:19.057 "medium_priority_weight": 0, 00:06:19.057 "high_priority_weight": 0, 00:06:19.057 "nvme_adminq_poll_period_us": 10000, 00:06:19.057 "nvme_ioq_poll_period_us": 0, 00:06:19.057 "io_queue_requests": 0, 00:06:19.057 "delay_cmd_submit": true, 00:06:19.057 "transport_retry_count": 4, 00:06:19.057 "bdev_retry_count": 3, 00:06:19.057 "transport_ack_timeout": 0, 00:06:19.057 "ctrlr_loss_timeout_sec": 0, 00:06:19.057 "reconnect_delay_sec": 0, 00:06:19.057 "fast_io_fail_timeout_sec": 0, 00:06:19.057 "disable_auto_failback": false, 00:06:19.057 "generate_uuids": false, 00:06:19.057 "transport_tos": 0, 00:06:19.057 "nvme_error_stat": false, 00:06:19.057 "rdma_srq_size": 0, 00:06:19.057 "io_path_stat": false, 00:06:19.057 "allow_accel_sequence": false, 00:06:19.057 "rdma_max_cq_size": 0, 00:06:19.057 "rdma_cm_event_timeout_ms": 0, 00:06:19.057 "dhchap_digests": [ 00:06:19.057 "sha256", 00:06:19.057 "sha384", 00:06:19.057 "sha512" 00:06:19.057 ], 00:06:19.057 "dhchap_dhgroups": [ 00:06:19.057 "null", 00:06:19.057 "ffdhe2048", 00:06:19.057 "ffdhe3072", 00:06:19.057 "ffdhe4096", 00:06:19.057 "ffdhe6144", 
00:06:19.057 "ffdhe8192" 00:06:19.057 ] 00:06:19.057 } 00:06:19.057 }, 00:06:19.057 { 00:06:19.057 "method": "bdev_nvme_set_hotplug", 00:06:19.057 "params": { 00:06:19.057 "period_us": 100000, 00:06:19.057 "enable": false 00:06:19.057 } 00:06:19.057 }, 00:06:19.057 { 00:06:19.057 "method": "bdev_wait_for_examine" 00:06:19.057 } 00:06:19.057 ] 00:06:19.057 }, 00:06:19.057 { 00:06:19.057 "subsystem": "scsi", 00:06:19.057 "config": null 00:06:19.057 }, 00:06:19.057 { 00:06:19.057 "subsystem": "scheduler", 00:06:19.057 "config": [ 00:06:19.057 { 00:06:19.057 "method": "framework_set_scheduler", 00:06:19.057 "params": { 00:06:19.057 "name": "static" 00:06:19.057 } 00:06:19.057 } 00:06:19.057 ] 00:06:19.057 }, 00:06:19.057 { 00:06:19.057 "subsystem": "vhost_scsi", 00:06:19.057 "config": [] 00:06:19.057 }, 00:06:19.057 { 00:06:19.057 "subsystem": "vhost_blk", 00:06:19.057 "config": [] 00:06:19.057 }, 00:06:19.057 { 00:06:19.057 "subsystem": "ublk", 00:06:19.057 "config": [] 00:06:19.057 }, 00:06:19.057 { 00:06:19.057 "subsystem": "nbd", 00:06:19.057 "config": [] 00:06:19.057 }, 00:06:19.057 { 00:06:19.057 "subsystem": "nvmf", 00:06:19.057 "config": [ 00:06:19.057 { 00:06:19.057 "method": "nvmf_set_config", 00:06:19.057 "params": { 00:06:19.057 "discovery_filter": "match_any", 00:06:19.057 "admin_cmd_passthru": { 00:06:19.057 "identify_ctrlr": false 00:06:19.057 } 00:06:19.057 } 00:06:19.057 }, 00:06:19.057 { 00:06:19.057 "method": "nvmf_set_max_subsystems", 00:06:19.057 "params": { 00:06:19.057 "max_subsystems": 1024 00:06:19.057 } 00:06:19.057 }, 00:06:19.057 { 00:06:19.057 "method": "nvmf_set_crdt", 00:06:19.057 "params": { 00:06:19.057 "crdt1": 0, 00:06:19.057 "crdt2": 0, 00:06:19.057 "crdt3": 0 00:06:19.057 } 00:06:19.057 }, 00:06:19.057 { 00:06:19.057 "method": "nvmf_create_transport", 00:06:19.057 "params": { 00:06:19.057 "trtype": "TCP", 00:06:19.057 "max_queue_depth": 128, 00:06:19.057 "max_io_qpairs_per_ctrlr": 127, 00:06:19.057 "in_capsule_data_size": 4096, 00:06:19.057 "max_io_size": 131072, 00:06:19.057 "io_unit_size": 131072, 00:06:19.057 "max_aq_depth": 128, 00:06:19.057 "num_shared_buffers": 511, 00:06:19.057 "buf_cache_size": 4294967295, 00:06:19.057 "dif_insert_or_strip": false, 00:06:19.057 "zcopy": false, 00:06:19.057 "c2h_success": true, 00:06:19.057 "sock_priority": 0, 00:06:19.057 "abort_timeout_sec": 1, 00:06:19.057 "ack_timeout": 0, 00:06:19.057 "data_wr_pool_size": 0 00:06:19.057 } 00:06:19.057 } 00:06:19.057 ] 00:06:19.057 }, 00:06:19.057 { 00:06:19.057 "subsystem": "iscsi", 00:06:19.057 "config": [ 00:06:19.057 { 00:06:19.057 "method": "iscsi_set_options", 00:06:19.057 "params": { 00:06:19.057 "node_base": "iqn.2016-06.io.spdk", 00:06:19.057 "max_sessions": 128, 00:06:19.057 "max_connections_per_session": 2, 00:06:19.057 "max_queue_depth": 64, 00:06:19.057 "default_time2wait": 2, 00:06:19.057 "default_time2retain": 20, 00:06:19.057 "first_burst_length": 8192, 00:06:19.057 "immediate_data": true, 00:06:19.057 "allow_duplicated_isid": false, 00:06:19.057 "error_recovery_level": 0, 00:06:19.057 "nop_timeout": 60, 00:06:19.057 "nop_in_interval": 30, 00:06:19.057 "disable_chap": false, 00:06:19.057 "require_chap": false, 00:06:19.057 "mutual_chap": false, 00:06:19.057 "chap_group": 0, 00:06:19.057 "max_large_datain_per_connection": 64, 00:06:19.057 "max_r2t_per_connection": 4, 00:06:19.057 "pdu_pool_size": 36864, 00:06:19.058 "immediate_data_pool_size": 16384, 00:06:19.058 "data_out_pool_size": 2048 00:06:19.058 } 00:06:19.058 } 00:06:19.058 ] 00:06:19.058 } 00:06:19.058 ] 
00:06:19.058 } 00:06:19.058 11:48:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:19.058 11:48:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 74769 00:06:19.058 11:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 74769 ']' 00:06:19.058 11:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 74769 00:06:19.058 11:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:19.058 11:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:19.058 11:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74769 00:06:19.058 killing process with pid 74769 00:06:19.058 11:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:19.058 11:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:19.058 11:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74769' 00:06:19.058 11:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 74769 00:06:19.058 11:48:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 74769 00:06:19.624 11:48:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=74795 00:06:19.624 11:48:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:19.624 11:48:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:24.942 11:48:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 74795 00:06:24.942 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 74795 ']' 00:06:24.942 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 74795 00:06:24.942 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:06:24.942 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:24.942 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74795 00:06:24.942 killing process with pid 74795 00:06:24.942 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:24.942 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:24.942 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74795' 00:06:24.942 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 74795 00:06:24.942 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 74795 00:06:24.942 11:48:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:24.942 11:48:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:24.942 ************************************ 00:06:24.942 END TEST skip_rpc_with_json 00:06:24.942 ************************************ 00:06:24.942 00:06:24.942 real 0m6.902s 00:06:24.942 user 0m6.451s 00:06:24.942 sys 0m0.712s 00:06:24.942 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # 
xtrace_disable 00:06:24.942 11:48:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:24.942 11:48:23 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:24.942 11:48:23 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:24.942 11:48:23 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:24.942 11:48:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.942 ************************************ 00:06:24.942 START TEST skip_rpc_with_delay 00:06:24.942 ************************************ 00:06:24.942 11:48:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:06:24.942 11:48:23 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:24.942 11:48:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:24.942 11:48:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:24.942 11:48:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:24.942 11:48:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.942 11:48:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:24.942 11:48:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.942 11:48:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:24.943 11:48:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.943 11:48:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:24.943 11:48:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:24.943 11:48:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:24.943 [2024-07-21 11:48:23.782440] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
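skip_rpc_with_delay is a negative test: --wait-for-rpc makes no sense when --no-rpc-server suppresses the RPC server, so spdk_tgt must refuse the combination with the ERROR just logged, and the harness's NOT wrapper turns that non-zero exit into a pass. A rough standalone equivalent of the assertion, assuming the build/bin path of this workspace:

$ if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
>   echo 'FAIL: contradictory flags were accepted' >&2
> else
>   echo 'PASS: spdk_tgt rejected --wait-for-rpc without an RPC server'
> fi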
00:06:24.943 [2024-07-21 11:48:23.782628] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:25.202 11:48:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:25.202 11:48:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:25.202 ************************************ 00:06:25.202 END TEST skip_rpc_with_delay 00:06:25.202 ************************************ 00:06:25.202 11:48:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:25.202 11:48:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:25.202 00:06:25.202 real 0m0.167s 00:06:25.202 user 0m0.085s 00:06:25.202 sys 0m0.080s 00:06:25.202 11:48:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:25.202 11:48:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:25.202 11:48:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:25.202 11:48:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:25.202 11:48:23 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:25.202 11:48:23 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:25.202 11:48:23 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:25.202 11:48:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:25.202 ************************************ 00:06:25.202 START TEST exit_on_failed_rpc_init 00:06:25.202 ************************************ 00:06:25.202 11:48:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:06:25.202 11:48:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=74907 00:06:25.202 11:48:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.202 11:48:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 74907 00:06:25.202 11:48:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 74907 ']' 00:06:25.202 11:48:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.202 11:48:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:25.202 11:48:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.202 11:48:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:25.202 11:48:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:25.202 [2024-07-21 11:48:24.014142] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:06:25.202 [2024-07-21 11:48:24.014274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74907 ] 00:06:25.460 [2024-07-21 11:48:24.177099] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.460 [2024-07-21 11:48:24.231418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.027 11:48:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:26.027 11:48:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:06:26.027 11:48:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:26.027 11:48:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:26.027 11:48:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:26.027 11:48:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:26.027 11:48:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.027 11:48:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.027 11:48:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.027 11:48:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.027 11:48:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.027 11:48:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.027 11:48:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.027 11:48:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:26.027 11:48:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:26.285 [2024-07-21 11:48:24.910222] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:26.285 [2024-07-21 11:48:24.910359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74925 ] 00:06:26.285 [2024-07-21 11:48:25.071189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.285 [2024-07-21 11:48:25.119545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.285 [2024-07-21 11:48:25.119661] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
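exit_on_failed_rpc_init provokes exactly this collision: the first spdk_tgt (-m 0x1, pid 74907) already holds /var/tmp/spdk.sock, so the second instance (-m 0x2) fails rpc_listen with the 'in use' error above and exits non-zero, which the NOT check below folds down to a pass (es=234 -> 106 -> 1). Outside the harness, two targets normally coexist by giving the second one its own RPC socket; a sketch, assuming spdk_tgt's -r/--rpc-socket option and a hypothetical second socket path:

$ build/bin/spdk_tgt -m 0x1 &                              # owns /var/tmp/spdk.sock
$ build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_second.sock   # separate socket, no clash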
00:06:26.285 [2024-07-21 11:48:25.119679] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:26.285 [2024-07-21 11:48:25.119692] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:26.543 11:48:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:26.543 11:48:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:26.543 11:48:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:26.543 11:48:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:26.543 11:48:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:26.543 11:48:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:26.543 11:48:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:26.543 11:48:25 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 74907 00:06:26.543 11:48:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 74907 ']' 00:06:26.543 11:48:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 74907 00:06:26.543 11:48:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:06:26.543 11:48:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:26.543 11:48:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 74907 00:06:26.543 11:48:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:26.543 11:48:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:26.543 11:48:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 74907' 00:06:26.544 killing process with pid 74907 00:06:26.544 11:48:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 74907 00:06:26.544 11:48:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 74907 00:06:26.803 00:06:26.803 real 0m1.717s 00:06:26.803 user 0m1.829s 00:06:26.803 sys 0m0.500s 00:06:26.803 11:48:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:26.803 11:48:25 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:26.803 ************************************ 00:06:26.803 END TEST exit_on_failed_rpc_init 00:06:26.803 ************************************ 00:06:27.061 11:48:25 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:27.061 00:06:27.061 real 0m14.588s 00:06:27.061 user 0m13.527s 00:06:27.061 sys 0m1.856s 00:06:27.061 11:48:25 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:27.061 11:48:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.061 ************************************ 00:06:27.061 END TEST skip_rpc 00:06:27.061 ************************************ 00:06:27.061 11:48:25 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:27.061 11:48:25 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:27.061 11:48:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:27.061 11:48:25 -- common/autotest_common.sh@10 -- # set +x 00:06:27.061 
************************************ 00:06:27.061 START TEST rpc_client 00:06:27.061 ************************************ 00:06:27.061 11:48:25 rpc_client -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:27.061 * Looking for test storage... 00:06:27.061 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:27.061 11:48:25 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:27.061 OK 00:06:27.321 11:48:25 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:27.321 00:06:27.321 real 0m0.191s 00:06:27.321 user 0m0.077s 00:06:27.321 sys 0m0.122s 00:06:27.321 11:48:25 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:27.321 11:48:25 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:27.321 ************************************ 00:06:27.321 END TEST rpc_client 00:06:27.321 ************************************ 00:06:27.321 11:48:26 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:27.321 11:48:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:27.321 11:48:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:27.321 11:48:26 -- common/autotest_common.sh@10 -- # set +x 00:06:27.321 ************************************ 00:06:27.321 START TEST json_config 00:06:27.321 ************************************ 00:06:27.321 11:48:26 json_config -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:27.321 11:48:26 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:53483a59-0def-44b8-86f3-c10a14190d68 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=53483a59-0def-44b8-86f3-c10a14190d68 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:27.321 11:48:26 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.321 11:48:26 json_config -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.321 11:48:26 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.321 11:48:26 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.321 11:48:26 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.321 11:48:26 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.321 11:48:26 json_config -- paths/export.sh@5 -- # export PATH 00:06:27.321 11:48:26 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@47 -- # : 0 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:27.321 11:48:26 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:27.321 11:48:26 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:27.321 11:48:26 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:27.321 11:48:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:27.321 11:48:26 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:27.321 11:48:26 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:27.321 11:48:26 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:27.321 
WARNING: No tests are enabled so not running JSON configuration tests 00:06:27.321 11:48:26 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:27.321 00:06:27.321 real 0m0.129s 00:06:27.321 user 0m0.056s 00:06:27.321 sys 0m0.070s 00:06:27.321 11:48:26 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:27.321 ************************************ 00:06:27.321 END TEST json_config 00:06:27.321 ************************************ 00:06:27.321 11:48:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.580 11:48:26 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:27.580 11:48:26 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:27.580 11:48:26 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:27.580 11:48:26 -- common/autotest_common.sh@10 -- # set +x 00:06:27.580 ************************************ 00:06:27.580 START TEST json_config_extra_key 00:06:27.580 ************************************ 00:06:27.580 11:48:26 json_config_extra_key -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:27.580 11:48:26 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:53483a59-0def-44b8-86f3-c10a14190d68 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=53483a59-0def-44b8-86f3-c10a14190d68 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:27.580 11:48:26 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:27.580 11:48:26 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:27.580 11:48:26 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:27.580 
11:48:26 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.580 11:48:26 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.580 11:48:26 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.580 11:48:26 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:27.580 11:48:26 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:27.580 11:48:26 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:27.580 11:48:26 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:27.580 11:48:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:27.580 11:48:26 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:27.580 11:48:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:27.580 11:48:26 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:27.580 11:48:26 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:27.580 11:48:26 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:27.580 11:48:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:27.580 11:48:26 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:27.580 11:48:26 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:27.580 11:48:26 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:27.580 INFO: launching applications... 00:06:27.580 11:48:26 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:27.580 11:48:26 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:27.581 11:48:26 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:27.581 11:48:26 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:27.581 11:48:26 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:27.581 11:48:26 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:27.581 11:48:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:27.581 11:48:26 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:27.581 11:48:26 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=75083 00:06:27.581 11:48:26 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:27.581 Waiting for target to run... 00:06:27.581 11:48:26 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:27.581 11:48:26 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 75083 /var/tmp/spdk_tgt.sock 00:06:27.581 11:48:26 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 75083 ']' 00:06:27.581 11:48:26 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:27.581 11:48:26 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:27.581 11:48:26 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:27.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:27.581 11:48:26 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:27.581 11:48:26 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:27.581 [2024-07-21 11:48:26.426931] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
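The surrounding records trace json_config/common.sh's application lifecycle: spdk_tgt is started with the extra_key.json config on a private RPC socket, waitforlisten polls until that socket answers, and the shutdown path a little further below sends SIGINT and then polls the pid. A condensed sketch of that pattern, with paths and limits copied from this run (the standalone loop is a simplification of what common.sh does):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid=$!                                   # stored as app_pid["target"] in common.sh
    # waitforlisten (autotest_common.sh) retries until the socket accepts RPCs.
    # Shutdown: SIGINT, then poll liveness for up to 30 x 0.5 s before giving up.
    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$app_pid" 2>/dev/null || break  # process gone -> clean shutdown
        sleep 0.5
    done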
00:06:27.581 [2024-07-21 11:48:26.427516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75083 ] 00:06:28.147 [2024-07-21 11:48:26.810117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.147 [2024-07-21 11:48:26.842727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.405 00:06:28.405 INFO: shutting down applications... 00:06:28.405 11:48:27 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:28.405 11:48:27 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:06:28.405 11:48:27 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:28.405 11:48:27 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:28.405 11:48:27 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:28.406 11:48:27 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:28.406 11:48:27 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:28.406 11:48:27 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 75083 ]] 00:06:28.406 11:48:27 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 75083 00:06:28.406 11:48:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:28.406 11:48:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:28.406 11:48:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 75083 00:06:28.406 11:48:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:28.972 11:48:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:28.972 11:48:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:28.972 11:48:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 75083 00:06:28.972 11:48:27 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:28.972 11:48:27 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:28.972 11:48:27 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:28.972 11:48:27 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:28.972 SPDK target shutdown done 00:06:28.972 11:48:27 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:28.972 Success 00:06:28.972 00:06:28.972 real 0m1.541s 00:06:28.972 user 0m1.260s 00:06:28.972 sys 0m0.458s 00:06:28.972 11:48:27 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:28.972 11:48:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:28.972 ************************************ 00:06:28.972 END TEST json_config_extra_key 00:06:28.972 ************************************ 00:06:28.972 11:48:27 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:28.972 11:48:27 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:28.972 11:48:27 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:28.972 11:48:27 -- common/autotest_common.sh@10 -- # set +x 00:06:28.972 ************************************ 00:06:28.972 START TEST alias_rpc 00:06:28.972 ************************************ 
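The alias_rpc suite that starts here exercises rpc.py's deprecated-alias layer: it boots a plain spdk_tgt and then drives load_config with -i, as traced below. A minimal sketch of that call, assuming a target already listening on the default /var/tmp/spdk.sock and an empty config piped on stdin purely for illustration:

    # -i / --include-aliases: also accept deprecated RPC method aliases when applying
    # the config (assumption from rpc.py's load_config options; the test below runs
    # 'load_config -i' verbatim).
    echo '{"subsystems": []}' | \
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i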
00:06:28.972 11:48:27 alias_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:29.231 * Looking for test storage... 00:06:29.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:29.231 11:48:27 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:29.231 11:48:27 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=75149 00:06:29.231 11:48:27 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:29.231 11:48:27 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 75149 00:06:29.231 11:48:27 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 75149 ']' 00:06:29.231 11:48:27 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.231 11:48:27 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:29.231 11:48:27 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.231 11:48:27 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:29.231 11:48:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.231 [2024-07-21 11:48:28.019932] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:29.231 [2024-07-21 11:48:28.020147] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75149 ] 00:06:29.490 [2024-07-21 11:48:28.169556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.490 [2024-07-21 11:48:28.216006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.056 11:48:28 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:30.056 11:48:28 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:30.056 11:48:28 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:30.314 11:48:29 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 75149 00:06:30.314 11:48:29 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 75149 ']' 00:06:30.314 11:48:29 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 75149 00:06:30.314 11:48:29 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:06:30.314 11:48:29 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:30.314 11:48:29 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75149 00:06:30.314 11:48:29 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:30.314 11:48:29 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:30.314 killing process with pid 75149 00:06:30.314 11:48:29 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75149' 00:06:30.314 11:48:29 alias_rpc -- common/autotest_common.sh@965 -- # kill 75149 00:06:30.314 11:48:29 alias_rpc -- common/autotest_common.sh@970 -- # wait 75149 00:06:30.573 00:06:30.573 real 0m1.584s 00:06:30.573 user 0m1.592s 00:06:30.573 sys 0m0.453s 00:06:30.573 11:48:29 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.573 11:48:29 alias_rpc -- common/autotest_common.sh@10 
-- # set +x 00:06:30.573 ************************************ 00:06:30.573 END TEST alias_rpc 00:06:30.573 ************************************ 00:06:30.832 11:48:29 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:30.832 11:48:29 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:30.832 11:48:29 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:30.832 11:48:29 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.832 11:48:29 -- common/autotest_common.sh@10 -- # set +x 00:06:30.832 ************************************ 00:06:30.832 START TEST spdkcli_tcp 00:06:30.832 ************************************ 00:06:30.832 11:48:29 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:30.832 * Looking for test storage... 00:06:30.832 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:30.832 11:48:29 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:30.832 11:48:29 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:30.832 11:48:29 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:30.832 11:48:29 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:30.832 11:48:29 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:30.832 11:48:29 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:30.832 11:48:29 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:30.832 11:48:29 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:30.832 11:48:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:30.832 11:48:29 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=75220 00:06:30.832 11:48:29 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:30.832 11:48:29 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 75220 00:06:30.832 11:48:29 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 75220 ']' 00:06:30.832 11:48:29 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.832 11:48:29 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:30.832 11:48:29 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.832 11:48:29 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:30.832 11:48:29 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:30.832 [2024-07-21 11:48:29.688526] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
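tcp.sh, started above, checks that rpc.py can reach the target over TCP: a socat process bridges a listening TCP port to the target's UNIX-domain RPC socket, and rpc.py is pointed at the TCP side. Both commands appear verbatim in the trace below; in isolation the bridge looks like:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # TCP 127.0.0.1:9998 -> RPC socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
        -s 127.0.0.1 -p 9998 rpc_get_methods                  # -r: connection retries, -t: timeout (s)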
00:06:30.832 [2024-07-21 11:48:29.688665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75220 ] 00:06:31.091 [2024-07-21 11:48:29.856927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.091 [2024-07-21 11:48:29.909626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.091 [2024-07-21 11:48:29.909749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.026 11:48:30 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:32.026 11:48:30 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:06:32.026 11:48:30 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:32.026 11:48:30 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=75237 00:06:32.026 11:48:30 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:32.026 [ 00:06:32.026 "bdev_malloc_delete", 00:06:32.026 "bdev_malloc_create", 00:06:32.026 "bdev_null_resize", 00:06:32.026 "bdev_null_delete", 00:06:32.026 "bdev_null_create", 00:06:32.026 "bdev_nvme_cuse_unregister", 00:06:32.026 "bdev_nvme_cuse_register", 00:06:32.026 "bdev_opal_new_user", 00:06:32.026 "bdev_opal_set_lock_state", 00:06:32.026 "bdev_opal_delete", 00:06:32.026 "bdev_opal_get_info", 00:06:32.026 "bdev_opal_create", 00:06:32.026 "bdev_nvme_opal_revert", 00:06:32.026 "bdev_nvme_opal_init", 00:06:32.026 "bdev_nvme_send_cmd", 00:06:32.026 "bdev_nvme_get_path_iostat", 00:06:32.026 "bdev_nvme_get_mdns_discovery_info", 00:06:32.026 "bdev_nvme_stop_mdns_discovery", 00:06:32.026 "bdev_nvme_start_mdns_discovery", 00:06:32.026 "bdev_nvme_set_multipath_policy", 00:06:32.026 "bdev_nvme_set_preferred_path", 00:06:32.026 "bdev_nvme_get_io_paths", 00:06:32.026 "bdev_nvme_remove_error_injection", 00:06:32.026 "bdev_nvme_add_error_injection", 00:06:32.026 "bdev_nvme_get_discovery_info", 00:06:32.026 "bdev_nvme_stop_discovery", 00:06:32.026 "bdev_nvme_start_discovery", 00:06:32.026 "bdev_nvme_get_controller_health_info", 00:06:32.026 "bdev_nvme_disable_controller", 00:06:32.026 "bdev_nvme_enable_controller", 00:06:32.026 "bdev_nvme_reset_controller", 00:06:32.026 "bdev_nvme_get_transport_statistics", 00:06:32.026 "bdev_nvme_apply_firmware", 00:06:32.026 "bdev_nvme_detach_controller", 00:06:32.026 "bdev_nvme_get_controllers", 00:06:32.026 "bdev_nvme_attach_controller", 00:06:32.026 "bdev_nvme_set_hotplug", 00:06:32.026 "bdev_nvme_set_options", 00:06:32.026 "bdev_passthru_delete", 00:06:32.026 "bdev_passthru_create", 00:06:32.026 "bdev_lvol_set_parent_bdev", 00:06:32.026 "bdev_lvol_set_parent", 00:06:32.026 "bdev_lvol_check_shallow_copy", 00:06:32.026 "bdev_lvol_start_shallow_copy", 00:06:32.026 "bdev_lvol_grow_lvstore", 00:06:32.026 "bdev_lvol_get_lvols", 00:06:32.026 "bdev_lvol_get_lvstores", 00:06:32.026 "bdev_lvol_delete", 00:06:32.026 "bdev_lvol_set_read_only", 00:06:32.027 "bdev_lvol_resize", 00:06:32.027 "bdev_lvol_decouple_parent", 00:06:32.027 "bdev_lvol_inflate", 00:06:32.027 "bdev_lvol_rename", 00:06:32.027 "bdev_lvol_clone_bdev", 00:06:32.027 "bdev_lvol_clone", 00:06:32.027 "bdev_lvol_snapshot", 00:06:32.027 "bdev_lvol_create", 00:06:32.027 "bdev_lvol_delete_lvstore", 00:06:32.027 "bdev_lvol_rename_lvstore", 00:06:32.027 "bdev_lvol_create_lvstore", 00:06:32.027 
"bdev_raid_set_options", 00:06:32.027 "bdev_raid_remove_base_bdev", 00:06:32.027 "bdev_raid_add_base_bdev", 00:06:32.027 "bdev_raid_delete", 00:06:32.027 "bdev_raid_create", 00:06:32.027 "bdev_raid_get_bdevs", 00:06:32.027 "bdev_error_inject_error", 00:06:32.027 "bdev_error_delete", 00:06:32.027 "bdev_error_create", 00:06:32.027 "bdev_split_delete", 00:06:32.027 "bdev_split_create", 00:06:32.027 "bdev_delay_delete", 00:06:32.027 "bdev_delay_create", 00:06:32.027 "bdev_delay_update_latency", 00:06:32.027 "bdev_zone_block_delete", 00:06:32.027 "bdev_zone_block_create", 00:06:32.027 "blobfs_create", 00:06:32.027 "blobfs_detect", 00:06:32.027 "blobfs_set_cache_size", 00:06:32.027 "bdev_xnvme_delete", 00:06:32.027 "bdev_xnvme_create", 00:06:32.027 "bdev_aio_delete", 00:06:32.027 "bdev_aio_rescan", 00:06:32.027 "bdev_aio_create", 00:06:32.027 "bdev_ftl_set_property", 00:06:32.027 "bdev_ftl_get_properties", 00:06:32.027 "bdev_ftl_get_stats", 00:06:32.027 "bdev_ftl_unmap", 00:06:32.027 "bdev_ftl_unload", 00:06:32.027 "bdev_ftl_delete", 00:06:32.027 "bdev_ftl_load", 00:06:32.027 "bdev_ftl_create", 00:06:32.027 "bdev_virtio_attach_controller", 00:06:32.027 "bdev_virtio_scsi_get_devices", 00:06:32.027 "bdev_virtio_detach_controller", 00:06:32.027 "bdev_virtio_blk_set_hotplug", 00:06:32.027 "bdev_iscsi_delete", 00:06:32.027 "bdev_iscsi_create", 00:06:32.027 "bdev_iscsi_set_options", 00:06:32.027 "accel_error_inject_error", 00:06:32.027 "ioat_scan_accel_module", 00:06:32.027 "dsa_scan_accel_module", 00:06:32.027 "iaa_scan_accel_module", 00:06:32.027 "keyring_file_remove_key", 00:06:32.027 "keyring_file_add_key", 00:06:32.027 "keyring_linux_set_options", 00:06:32.027 "iscsi_get_histogram", 00:06:32.027 "iscsi_enable_histogram", 00:06:32.027 "iscsi_set_options", 00:06:32.027 "iscsi_get_auth_groups", 00:06:32.027 "iscsi_auth_group_remove_secret", 00:06:32.027 "iscsi_auth_group_add_secret", 00:06:32.027 "iscsi_delete_auth_group", 00:06:32.027 "iscsi_create_auth_group", 00:06:32.027 "iscsi_set_discovery_auth", 00:06:32.027 "iscsi_get_options", 00:06:32.027 "iscsi_target_node_request_logout", 00:06:32.027 "iscsi_target_node_set_redirect", 00:06:32.027 "iscsi_target_node_set_auth", 00:06:32.027 "iscsi_target_node_add_lun", 00:06:32.027 "iscsi_get_stats", 00:06:32.027 "iscsi_get_connections", 00:06:32.027 "iscsi_portal_group_set_auth", 00:06:32.027 "iscsi_start_portal_group", 00:06:32.027 "iscsi_delete_portal_group", 00:06:32.027 "iscsi_create_portal_group", 00:06:32.027 "iscsi_get_portal_groups", 00:06:32.027 "iscsi_delete_target_node", 00:06:32.027 "iscsi_target_node_remove_pg_ig_maps", 00:06:32.027 "iscsi_target_node_add_pg_ig_maps", 00:06:32.027 "iscsi_create_target_node", 00:06:32.027 "iscsi_get_target_nodes", 00:06:32.027 "iscsi_delete_initiator_group", 00:06:32.027 "iscsi_initiator_group_remove_initiators", 00:06:32.027 "iscsi_initiator_group_add_initiators", 00:06:32.027 "iscsi_create_initiator_group", 00:06:32.027 "iscsi_get_initiator_groups", 00:06:32.027 "nvmf_set_crdt", 00:06:32.027 "nvmf_set_config", 00:06:32.027 "nvmf_set_max_subsystems", 00:06:32.027 "nvmf_stop_mdns_prr", 00:06:32.027 "nvmf_publish_mdns_prr", 00:06:32.027 "nvmf_subsystem_get_listeners", 00:06:32.027 "nvmf_subsystem_get_qpairs", 00:06:32.027 "nvmf_subsystem_get_controllers", 00:06:32.027 "nvmf_get_stats", 00:06:32.027 "nvmf_get_transports", 00:06:32.027 "nvmf_create_transport", 00:06:32.027 "nvmf_get_targets", 00:06:32.027 "nvmf_delete_target", 00:06:32.027 "nvmf_create_target", 00:06:32.027 "nvmf_subsystem_allow_any_host", 
00:06:32.027 "nvmf_subsystem_remove_host", 00:06:32.027 "nvmf_subsystem_add_host", 00:06:32.027 "nvmf_ns_remove_host", 00:06:32.027 "nvmf_ns_add_host", 00:06:32.027 "nvmf_subsystem_remove_ns", 00:06:32.027 "nvmf_subsystem_add_ns", 00:06:32.027 "nvmf_subsystem_listener_set_ana_state", 00:06:32.027 "nvmf_discovery_get_referrals", 00:06:32.027 "nvmf_discovery_remove_referral", 00:06:32.027 "nvmf_discovery_add_referral", 00:06:32.027 "nvmf_subsystem_remove_listener", 00:06:32.027 "nvmf_subsystem_add_listener", 00:06:32.027 "nvmf_delete_subsystem", 00:06:32.027 "nvmf_create_subsystem", 00:06:32.027 "nvmf_get_subsystems", 00:06:32.027 "env_dpdk_get_mem_stats", 00:06:32.027 "nbd_get_disks", 00:06:32.027 "nbd_stop_disk", 00:06:32.027 "nbd_start_disk", 00:06:32.027 "ublk_recover_disk", 00:06:32.027 "ublk_get_disks", 00:06:32.027 "ublk_stop_disk", 00:06:32.027 "ublk_start_disk", 00:06:32.027 "ublk_destroy_target", 00:06:32.027 "ublk_create_target", 00:06:32.027 "virtio_blk_create_transport", 00:06:32.027 "virtio_blk_get_transports", 00:06:32.027 "vhost_controller_set_coalescing", 00:06:32.027 "vhost_get_controllers", 00:06:32.027 "vhost_delete_controller", 00:06:32.027 "vhost_create_blk_controller", 00:06:32.027 "vhost_scsi_controller_remove_target", 00:06:32.027 "vhost_scsi_controller_add_target", 00:06:32.027 "vhost_start_scsi_controller", 00:06:32.027 "vhost_create_scsi_controller", 00:06:32.027 "thread_set_cpumask", 00:06:32.027 "framework_get_scheduler", 00:06:32.027 "framework_set_scheduler", 00:06:32.027 "framework_get_reactors", 00:06:32.027 "thread_get_io_channels", 00:06:32.027 "thread_get_pollers", 00:06:32.027 "thread_get_stats", 00:06:32.027 "framework_monitor_context_switch", 00:06:32.027 "spdk_kill_instance", 00:06:32.027 "log_enable_timestamps", 00:06:32.027 "log_get_flags", 00:06:32.027 "log_clear_flag", 00:06:32.027 "log_set_flag", 00:06:32.027 "log_get_level", 00:06:32.027 "log_set_level", 00:06:32.027 "log_get_print_level", 00:06:32.027 "log_set_print_level", 00:06:32.027 "framework_enable_cpumask_locks", 00:06:32.027 "framework_disable_cpumask_locks", 00:06:32.027 "framework_wait_init", 00:06:32.027 "framework_start_init", 00:06:32.027 "scsi_get_devices", 00:06:32.027 "bdev_get_histogram", 00:06:32.027 "bdev_enable_histogram", 00:06:32.027 "bdev_set_qos_limit", 00:06:32.027 "bdev_set_qd_sampling_period", 00:06:32.027 "bdev_get_bdevs", 00:06:32.027 "bdev_reset_iostat", 00:06:32.027 "bdev_get_iostat", 00:06:32.027 "bdev_examine", 00:06:32.027 "bdev_wait_for_examine", 00:06:32.027 "bdev_set_options", 00:06:32.027 "notify_get_notifications", 00:06:32.027 "notify_get_types", 00:06:32.027 "accel_get_stats", 00:06:32.027 "accel_set_options", 00:06:32.027 "accel_set_driver", 00:06:32.027 "accel_crypto_key_destroy", 00:06:32.027 "accel_crypto_keys_get", 00:06:32.027 "accel_crypto_key_create", 00:06:32.027 "accel_assign_opc", 00:06:32.027 "accel_get_module_info", 00:06:32.027 "accel_get_opc_assignments", 00:06:32.027 "vmd_rescan", 00:06:32.027 "vmd_remove_device", 00:06:32.027 "vmd_enable", 00:06:32.027 "sock_get_default_impl", 00:06:32.028 "sock_set_default_impl", 00:06:32.028 "sock_impl_set_options", 00:06:32.028 "sock_impl_get_options", 00:06:32.028 "iobuf_get_stats", 00:06:32.028 "iobuf_set_options", 00:06:32.028 "framework_get_pci_devices", 00:06:32.028 "framework_get_config", 00:06:32.028 "framework_get_subsystems", 00:06:32.028 "trace_get_info", 00:06:32.028 "trace_get_tpoint_group_mask", 00:06:32.028 "trace_disable_tpoint_group", 00:06:32.028 "trace_enable_tpoint_group", 
00:06:32.028 "trace_clear_tpoint_mask", 00:06:32.028 "trace_set_tpoint_mask", 00:06:32.028 "keyring_get_keys", 00:06:32.028 "spdk_get_version", 00:06:32.028 "rpc_get_methods" 00:06:32.028 ] 00:06:32.028 11:48:30 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:32.028 11:48:30 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:32.028 11:48:30 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:32.028 11:48:30 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:32.028 11:48:30 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 75220 00:06:32.028 11:48:30 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 75220 ']' 00:06:32.028 11:48:30 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 75220 00:06:32.028 11:48:30 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:06:32.028 11:48:30 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:32.028 11:48:30 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75220 00:06:32.028 11:48:30 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:32.028 11:48:30 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:32.028 11:48:30 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75220' 00:06:32.028 killing process with pid 75220 00:06:32.028 11:48:30 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 75220 00:06:32.028 11:48:30 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 75220 00:06:32.597 00:06:32.597 real 0m1.763s 00:06:32.597 user 0m3.040s 00:06:32.597 sys 0m0.519s 00:06:32.597 11:48:31 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:32.597 11:48:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:32.597 ************************************ 00:06:32.597 END TEST spdkcli_tcp 00:06:32.597 ************************************ 00:06:32.597 11:48:31 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:32.597 11:48:31 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:32.597 11:48:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:32.597 11:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:32.597 ************************************ 00:06:32.597 START TEST dpdk_mem_utility 00:06:32.597 ************************************ 00:06:32.597 11:48:31 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:32.597 * Looking for test storage... 00:06:32.597 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:32.597 11:48:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:32.597 11:48:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:32.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:32.597 11:48:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=75307 00:06:32.597 11:48:31 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 75307 00:06:32.597 11:48:31 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 75307 ']' 00:06:32.597 11:48:31 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.597 11:48:31 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:32.597 11:48:31 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.597 11:48:31 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:32.597 11:48:31 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:32.857 [2024-07-21 11:48:31.494664] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:32.857 [2024-07-21 11:48:31.494883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75307 ] 00:06:32.857 [2024-07-21 11:48:31.658814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.857 [2024-07-21 11:48:31.711741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.425 11:48:32 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:33.425 11:48:32 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:06:33.425 11:48:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:33.425 11:48:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:33.425 11:48:32 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.425 11:48:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:33.687 { 00:06:33.687 "filename": "/tmp/spdk_mem_dump.txt" 00:06:33.687 } 00:06:33.687 11:48:32 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.687 11:48:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:33.687 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:33.687 1 heaps totaling size 814.000000 MiB 00:06:33.687 size: 814.000000 MiB heap id: 0 00:06:33.687 end heaps---------- 00:06:33.687 8 mempools totaling size 598.116089 MiB 00:06:33.687 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:33.687 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:33.687 size: 84.521057 MiB name: bdev_io_75307 00:06:33.687 size: 51.011292 MiB name: evtpool_75307 00:06:33.687 size: 50.003479 MiB name: msgpool_75307 00:06:33.687 size: 21.763794 MiB name: PDU_Pool 00:06:33.687 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:33.687 size: 0.026123 MiB name: Session_Pool 00:06:33.687 end mempools------- 00:06:33.687 6 memzones totaling size 4.142822 MiB 00:06:33.687 size: 1.000366 MiB name: RG_ring_0_75307 00:06:33.687 size: 1.000366 MiB name: RG_ring_1_75307 00:06:33.687 size: 1.000366 MiB name: RG_ring_4_75307 00:06:33.687 size: 1.000366 MiB name: RG_ring_5_75307 00:06:33.687 size: 0.125366 MiB name: RG_ring_2_75307 00:06:33.687 size: 0.015991 MiB name: RG_ring_3_75307 00:06:33.687 end memzones------- 00:06:33.687 11:48:32 
dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:33.687 heap id: 0 total size: 814.000000 MiB number of busy elements: 299 number of free elements: 15 00:06:33.687 list of free elements. size: 12.472107 MiB 00:06:33.687 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:33.687 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:33.687 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:33.687 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:33.687 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:33.687 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:33.687 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:33.687 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:33.687 element at address: 0x200000200000 with size: 0.833191 MiB 00:06:33.687 element at address: 0x20001aa00000 with size: 0.568604 MiB 00:06:33.687 element at address: 0x20000b200000 with size: 0.489624 MiB 00:06:33.687 element at address: 0x200000800000 with size: 0.486145 MiB 00:06:33.687 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:33.687 element at address: 0x200027e00000 with size: 0.395935 MiB 00:06:33.687 element at address: 0x200003a00000 with size: 0.347839 MiB 00:06:33.687 list of standard malloc elements. size: 199.265320 MiB 00:06:33.687 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:33.687 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:33.687 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:33.687 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:33.687 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:33.687 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:33.687 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:33.687 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:33.687 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:33.687 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:06:33.687 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:06:33.687 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:06:33.687 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:06:33.687 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:06:33.687 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:06:33.687 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:06:33.687 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:06:33.687 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:06:33.687 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:33.687 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:33.687 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:33.687 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:33.687 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:33.687 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:33.687 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:33.687 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:33.687 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:33.687 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:33.687 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:33.687 element 
at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:33.687 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:33.687 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:33.688 element at address: 0x20000087c740 with size: 0.000183 MiB 00:06:33.688 element at address: 0x20000087c800 with size: 0.000183 MiB 00:06:33.688 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:33.688 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:33.688 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:33.688 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:33.688 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:33.688 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:33.688 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:33.688 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:33.688 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a59180 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a59240 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a59300 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a593c0 
with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a59480 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a59540 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a59600 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a59780 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a59840 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a59900 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:33.688 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:33.688 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:33.688 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:06:33.688 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:33.688 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:33.688 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 
00:06:33.688 element at address: 0x20000b27d880 with size: 0.000183 MiB
[... a long run of further free-list elements, 0x20000b27d940 through 0x200027e6ff00, each 0.000183 MiB, elided ...]
00:06:33.689 list of memzone associated elements. size: 602.262573 MiB
00:06:33.689 element at address: 0x20001aa95500 with size: 211.416748 MiB
00:06:33.689 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:06:33.689 element at address: 0x200027e6ffc0 with size: 157.562561 MiB
00:06:33.689 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:06:33.689 element at address: 0x2000139fab80 with size: 84.020630 MiB
00:06:33.689 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_75307_0
00:06:33.689 element at address: 0x2000009ff380 with size: 48.003052 MiB
00:06:33.689 associated memzone info: size: 48.002930 MiB name: MP_evtpool_75307_0
00:06:33.689 element at address: 0x200003fff380 with size: 48.003052 MiB
00:06:33.689 associated memzone info: size: 48.002930 MiB name: MP_msgpool_75307_0
00:06:33.689 element at address: 0x2000195be940 with size: 20.255554 MiB
00:06:33.689 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:06:33.689 element at address: 0x200031dfeb40 with size: 18.005066 MiB
00:06:33.689 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:06:33.689 element at address: 0x2000005ffe00 with size: 2.000488 MiB
00:06:33.689 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_75307
00:06:33.689 element at address: 0x200003bffe00 with size: 2.000488 MiB
00:06:33.689 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_75307
00:06:33.689 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:06:33.689 associated memzone info: size: 1.007996 MiB name: MP_evtpool_75307
00:06:33.689 element at address: 0x20000b2fde40 with size: 1.008118 MiB
00:06:33.689 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:06:33.689 element at address: 0x2000194bc800 with size: 1.008118 MiB
00:06:33.689 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:06:33.689 element at address: 0x2000070fde40 with size: 1.008118 MiB
00:06:33.689 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:06:33.689 element at address: 0x2000008fd240 with size: 1.008118 MiB
00:06:33.689 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:06:33.689 element at address: 0x200003eff180 with size: 1.000488 MiB
00:06:33.689 associated memzone info: size: 1.000366 MiB name: RG_ring_0_75307
00:06:33.689 element at address: 0x200003affc00 with size: 1.000488 MiB
00:06:33.689 associated memzone info: size: 1.000366 MiB name: RG_ring_1_75307
00:06:33.689 element at address: 0x2000138fa980 with size: 1.000488 MiB
00:06:33.689 associated memzone info: size: 1.000366 MiB name: RG_ring_4_75307
00:06:33.689 element at address: 0x200031cfe940 with size: 1.000488 MiB
00:06:33.689 associated memzone info: size: 1.000366 MiB name: RG_ring_5_75307
00:06:33.689 element at address: 0x200003a5b100 with size: 0.500488 MiB
00:06:33.689 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_75307
00:06:33.689 element at address: 0x20000b27db80 with size: 0.500488 MiB
00:06:33.689 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:06:33.689 element at address: 0x20000087cf80 with size: 0.500488 MiB
00:06:33.689 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:06:33.689 element at address: 0x20001947c540 with size: 0.250488 MiB
00:06:33.689 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:06:33.689 element at address: 0x200003adf880 with size: 0.125488 MiB
00:06:33.689 associated memzone info: size: 0.125366 MiB name: RG_ring_2_75307
00:06:33.689 element at address: 0x2000070f5b80 with size: 0.031738 MiB
00:06:33.689 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:06:33.689 element at address: 0x200027e65740 with size: 0.023743 MiB
00:06:33.689 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:06:33.689 element at address: 0x200003adb5c0 with size: 0.016113 MiB
00:06:33.689 associated memzone info: size: 0.015991 MiB name: RG_ring_3_75307
00:06:33.689 element at address: 0x200027e6b880 with size: 0.002441 MiB
00:06:33.689 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:06:33.689 element at address: 0x2000002d6780 with size: 0.000305 MiB
00:06:33.690 associated memzone info: size: 0.000183 MiB name: MP_msgpool_75307
00:06:33.690 element at address: 0x200003adb3c0 with size: 0.000305 MiB
00:06:33.690 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_75307
00:06:33.690 element at address: 0x200027e6c340 with size: 0.000305 MiB
00:06:33.690 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:06:33.690 11:48:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:06:33.690 11:48:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 75307
00:06:33.690 11:48:32 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 75307 ']'
00:06:33.690 11:48:32 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 75307
00:06:33.690 11:48:32 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname
00:06:33.690 11:48:32 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:06:33.690 11:48:32 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75307
00:06:33.690 11:48:32 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0
killing process with pid 75307
11:48:32 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
11:48:32 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75307'
11:48:32 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 75307
11:48:32 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 75307
00:06:33.967
00:06:33.967 real 0m1.533s
00:06:33.967 user 0m1.491s
00:06:33.967 sys 0m0.443s
00:06:33.967 ************************************
00:06:33.967 END TEST dpdk_mem_utility
00:06:33.967 ************************************
00:06:33.967 11:48:32 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:33.967 11:48:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:06:34.242 11:48:32 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:34.242 11:48:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:34.242 11:48:32 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:34.242 11:48:32 -- common/autotest_common.sh@10 -- # set +x
00:06:34.242 ************************************
00:06:34.243 START TEST event
00:06:34.243 ************************************
00:06:34.243 11:48:32 event -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:06:34.243 * Looking for test storage...
00:06:34.243 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:06:34.243 11:48:32 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:06:34.243 11:48:32 event -- bdev/nbd_common.sh@6 -- # set -e
00:06:34.243 11:48:32 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:34.243 11:48:32 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']'
00:06:34.243 11:48:32 event -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:34.243 11:48:32 event -- common/autotest_common.sh@10 -- # set +x
00:06:34.243 ************************************
00:06:34.243 START TEST event_perf
00:06:34.243 ************************************
00:06:34.243 11:48:33 event.event_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:34.243 Running I/O for 1 seconds...[2024-07-21 11:48:33.046407] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
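The killprocess trace above maps onto a helper of roughly this shape (a hedged reconstruction from the xtrace records, not the verbatim common/autotest_common.sh source; the sudo special case is elided):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                # @946: a pid must be supplied
        kill -0 "$pid"                           # @950: fails if the process is already gone
        if [ "$(uname)" = Linux ]; then          # @951
            process_name=$(ps --no-headers -o comm= "$pid")   # @952: reactor_0 in this run
        fi
        # @956: the real helper branches when process_name is sudo (not shown here)
        echo "killing process with pid $pid"     # @964
        kill "$pid"                              # @965
        wait "$pid"                              # @970: reap the child and collect its status
    }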
00:06:34.243 [2024-07-21 11:48:33.046522] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75385 ]
00:06:34.500 [2024-07-21 11:48:33.224673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:34.500 [2024-07-21 11:48:33.286714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:34.500 [2024-07-21 11:48:33.286945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.500 [2024-07-21 11:48:33.287064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:06:34.500 Running I/O for 1 seconds...[2024-07-21 11:48:33.287006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:35.873
00:06:35.873 lcore 0: 195893
00:06:35.873 lcore 1: 195892
00:06:35.873 lcore 2: 195893
00:06:35.873 lcore 3: 195895
00:06:35.873 done.
00:06:35.873
00:06:35.873 real 0m1.381s
00:06:35.873 user 0m4.133s
00:06:35.873 sys 0m0.126s
00:06:35.873 11:48:34 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:35.873 11:48:34 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:06:35.873 ************************************
00:06:35.873 END TEST event_perf
00:06:35.873 ************************************
00:06:35.873 11:48:34 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:06:35.873 11:48:34 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']'
00:06:35.873 11:48:34 event -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:35.873 11:48:34 event -- common/autotest_common.sh@10 -- # set +x
00:06:35.873 ************************************
00:06:35.873 START TEST event_reactor
00:06:35.873 ************************************
00:06:35.873 11:48:34 event.event_reactor -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:06:35.873 [2024-07-21 11:48:34.492957] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
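The starred banners and the real/user/sys triplets that bracket each test come from the autotest run_test wrapper. A minimal sketch of its shape, inferred from this output (the real helper also validates its arguments, which is what the '[' 4 -le 1 ']' records above are, and saves and restores xtrace state):

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"        # source of the real/user/sys lines in this log
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }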
00:06:35.873 [2024-07-21 11:48:34.493182] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75420 ]
00:06:35.873 [2024-07-21 11:48:34.654483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:35.873 [2024-07-21 11:48:34.700080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:37.257 test_start
00:06:37.257 oneshot
00:06:37.257 tick 100
00:06:37.257 tick 100
00:06:37.257 tick 250
00:06:37.257 tick 100
00:06:37.257 tick 100
00:06:37.257 tick 100
00:06:37.257 tick 250
00:06:37.257 tick 500
00:06:37.257 tick 100
00:06:37.257 tick 100
00:06:37.257 tick 250
00:06:37.257 tick 100
00:06:37.257 tick 100
00:06:37.257 test_end
00:06:37.257
00:06:37.257 real 0m1.345s
00:06:37.257 user 0m1.141s
00:06:37.257 sys 0m0.096s
00:06:37.258 11:48:35 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:37.258 11:48:35 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:06:37.258 ************************************
00:06:37.258 END TEST event_reactor
00:06:37.258 ************************************
00:06:37.258 11:48:35 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:37.258 11:48:35 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']'
00:06:37.258 11:48:35 event -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:37.258 11:48:35 event -- common/autotest_common.sh@10 -- # set +x
00:06:37.258 ************************************
00:06:37.258 START TEST event_reactor_perf
00:06:37.258 ************************************
00:06:37.258 11:48:35 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:37.258 [2024-07-21 11:48:35.891312] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
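The tick trace above is the reactor test firing one oneshot plus periodic timers with relative periods 100, 250 and 500 (the unit is not visible in the log); the 100-period timer accordingly fires most often within the 1-second window. On a long-running SPDK target, comparable poller state can be inspected over RPC; a hedged usage example (both method names exist in upstream SPDK, but verify them against the tree in use):

    scripts/rpc.py thread_get_pollers        # per-thread pollers and their periods
    scripts/rpc.py framework_get_reactors    # which lcore each reactor thread runs on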
00:06:37.258 [2024-07-21 11:48:35.891492] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75456 ]
00:06:37.258 [2024-07-21 11:48:36.052795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:37.258 [2024-07-21 11:48:36.097694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:38.634 test_start
00:06:38.634 test_end
00:06:38.634 Performance: 371656 events per second
00:06:38.634
00:06:38.634 real 0m1.342s
00:06:38.634 user 0m1.146s
00:06:38.634 sys 0m0.088s
00:06:38.634 11:48:37 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:38.634 11:48:37 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:06:38.634 ************************************
00:06:38.634 END TEST event_reactor_perf
00:06:38.634 ************************************
00:06:38.634 11:48:37 event -- event/event.sh@49 -- # uname -s
00:06:38.634 11:48:37 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:06:38.634 11:48:37 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:06:38.634 11:48:37 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:38.634 11:48:37 event -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:38.634 11:48:37 event -- common/autotest_common.sh@10 -- # set +x
00:06:38.634 ************************************
00:06:38.634 START TEST event_scheduler
00:06:38.634 ************************************
00:06:38.634 11:48:37 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:06:38.634 11:48:37 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:06:38.634 11:48:37 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=75520
00:06:38.634 11:48:37 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:06:38.634 11:48:37 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 75520
00:06:38.634 11:48:37 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 75520 ']'
00:06:38.634 11:48:37 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:38.634 11:48:37 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:38.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:38.634 11:48:37 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:38.634 11:48:37 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:38.634 11:48:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:38.634 [2024-07-21 11:48:37.457785] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
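The event_scheduler bring-up traced above, and continued below, reduces to this sequence (a hedged outline of scheduler.sh using the paths and pid from this run; backgrounding with & is assumed, since the script captures the pid):

    /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!                           # 75520 in this run
    trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$scheduler_pid"             # poll until /var/tmp/spdk.sock answers
    rpc_cmd framework_set_scheduler dynamic    # issued while the app idles in --wait-for-rpc mode
    rpc_cmd framework_start_init               # finish startup; the dynamic scheduler takes over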
00:06:38.634 [2024-07-21 11:48:37.457969] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75520 ]
00:06:38.893 [2024-07-21 11:48:37.646031] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:38.893 [2024-07-21 11:48:37.697230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:38.893 [2024-07-21 11:48:37.697367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:38.893 [2024-07-21 11:48:37.697424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:06:38.893 [2024-07-21 11:48:37.697471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:39.461 11:48:38 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:39.461 11:48:38 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0
00:06:39.461 11:48:38 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:39.461 11:48:38 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:39.461 11:48:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:39.461 POWER: Env isn't set yet!
00:06:39.461 POWER: Attempting to initialise ACPI cpufreq power management...
00:06:39.461 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:39.461 POWER: Cannot set governor of lcore 0 to userspace
00:06:39.461 POWER: Attempting to initialise PSTAT power management...
00:06:39.461 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:39.461 POWER: Cannot set governor of lcore 0 to performance
00:06:39.461 POWER: Attempting to initialise AMD PSTATE power management...
00:06:39.461 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:39.461 POWER: Cannot set governor of lcore 0 to userspace
00:06:39.461 POWER: Attempting to initialise CPPC power management...
00:06:39.461 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:06:39.461 POWER: Cannot set governor of lcore 0 to userspace
00:06:39.461 POWER: Attempting to initialise VM power management...
00:06:39.461 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:06:39.461 POWER: Unable to set Power Management Environment for lcore 0
[2024-07-21 11:48:38.277732] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0
[2024-07-21 11:48:38.277763] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0
[2024-07-21 11:48:38.277775] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor
[2024-07-21 11:48:38.277797] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20
[2024-07-21 11:48:38.277809] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80
[2024-07-21 11:48:38.277827] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:06:39.461 11:48:38 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:39.461 11:48:38 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:39.461 11:48:38 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:39.461 11:48:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:39.728 [2024-07-21 11:48:38.347718] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:06:39.728 11:48:38 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:39.728 11:48:38 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:06:39.728 11:48:38 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:06:39.728 11:48:38 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:39.728 11:48:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:39.728 ************************************
00:06:39.728 START TEST scheduler_create_thread
00:06:39.728 ************************************
00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread
00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:39.728 2
00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:06:39.728 3
00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:06:39.728 11:48:38
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.728 4 00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.728 5 00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.728 6 00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.728 7 00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.728 8 00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:39.728 9 00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.728 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.323 10 
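Each bare number in this stretch of the trace (2 through 10) appears to be the thread id echoed back by a scheduler_thread_create call. The RPC shape, as traced (scheduler_plugin is the test's own rpc.py plugin, loaded via --plugin):

    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    #   -n  thread name
    #   -m  cpumask to pin the thread to (omitted for unpinned threads such as one_third_active)
    #   -a  how busy the thread reports itself, in percent (100 = always busy, 0 = idle)

The records that follow use the returned ids the same way: scheduler_thread_set_active 11 50 re-weights a thread, and scheduler_thread_delete 12 removes one.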
00:06:40.323 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:40.323 11:48:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:40.323 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:40.323 11:48:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.702 11:48:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:41.702 11:48:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:41.702 11:48:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:41.702 11:48:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:41.702 11:48:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.269 11:48:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.269 11:48:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:42.269 11:48:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.269 11:48:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.205 11:48:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.205 11:48:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:43.205 11:48:41 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:43.205 11:48:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.205 11:48:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.771 11:48:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.771 00:06:43.771 real 0m4.209s 00:06:43.771 user 0m0.025s 00:06:43.771 sys 0m0.008s 00:06:43.771 11:48:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:43.771 11:48:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.771 ************************************ 00:06:43.771 END TEST scheduler_create_thread 00:06:43.771 ************************************ 00:06:43.772 11:48:42 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:43.772 11:48:42 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 75520 00:06:43.772 11:48:42 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 75520 ']' 00:06:43.772 11:48:42 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 75520 00:06:43.772 11:48:42 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:06:43.772 11:48:42 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:43.772 11:48:42 event.event_scheduler 
-- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75520 00:06:44.030 killing process with pid 75520 00:06:44.030 11:48:42 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:06:44.030 11:48:42 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:06:44.030 11:48:42 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75520' 00:06:44.030 11:48:42 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 75520 00:06:44.030 11:48:42 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 75520 00:06:44.030 [2024-07-21 11:48:42.845366] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:44.598 00:06:44.598 real 0m6.025s 00:06:44.598 user 0m12.996s 00:06:44.598 sys 0m0.467s 00:06:44.598 ************************************ 00:06:44.598 END TEST event_scheduler 00:06:44.598 ************************************ 00:06:44.598 11:48:43 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:44.598 11:48:43 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:44.598 11:48:43 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:44.598 11:48:43 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:44.598 11:48:43 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:06:44.598 11:48:43 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:44.598 11:48:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.598 ************************************ 00:06:44.598 START TEST app_repeat 00:06:44.598 ************************************ 00:06:44.598 11:48:43 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:06:44.598 11:48:43 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.598 11:48:43 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.598 11:48:43 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:44.598 11:48:43 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.598 11:48:43 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:44.598 11:48:43 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:44.598 11:48:43 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:44.598 11:48:43 event.app_repeat -- event/event.sh@19 -- # repeat_pid=75639 00:06:44.598 11:48:43 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:44.598 11:48:43 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:44.598 Process app_repeat pid: 75639 00:06:44.598 11:48:43 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 75639' 00:06:44.598 11:48:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:44.598 spdk_app_start Round 0 00:06:44.598 11:48:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:44.598 11:48:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75639 /var/tmp/spdk-nbd.sock 00:06:44.598 11:48:43 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 75639 ']' 00:06:44.598 11:48:43 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:44.598 11:48:43 event.app_repeat -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:06:44.598 11:48:43 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:44.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:44.598 11:48:43 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:44.598 11:48:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:44.598 [2024-07-21 11:48:43.410159] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:06:44.598 [2024-07-21 11:48:43.410271] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75639 ] 00:06:44.856 [2024-07-21 11:48:43.571200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:44.856 [2024-07-21 11:48:43.618411] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.856 [2024-07-21 11:48:43.618545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:45.421 11:48:44 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:45.421 11:48:44 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:45.421 11:48:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.695 Malloc0 00:06:45.695 11:48:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.958 Malloc1 00:06:45.958 11:48:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.958 11:48:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.958 11:48:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.958 11:48:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:45.958 11:48:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.958 11:48:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:45.958 11:48:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.958 11:48:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.958 11:48:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.958 11:48:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:45.958 11:48:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.958 11:48:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:45.958 11:48:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:45.958 11:48:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:45.958 11:48:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.958 11:48:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:46.217 /dev/nbd0 00:06:46.217 11:48:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:46.217 11:48:44 
event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:46.217 11:48:44 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:46.217 11:48:44 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:46.217 11:48:44 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:46.217 11:48:44 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:46.217 11:48:44 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:46.217 11:48:44 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:46.217 11:48:44 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:46.217 11:48:44 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:46.217 11:48:44 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.217 1+0 records in 00:06:46.217 1+0 records out 00:06:46.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265357 s, 15.4 MB/s 00:06:46.217 11:48:44 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.217 11:48:44 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:46.217 11:48:44 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.217 11:48:44 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:46.217 11:48:44 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:46.217 11:48:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.217 11:48:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.217 11:48:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:46.476 /dev/nbd1 00:06:46.476 11:48:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:46.476 11:48:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:46.476 11:48:45 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:46.476 11:48:45 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:46.476 11:48:45 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:46.476 11:48:45 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:46.476 11:48:45 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:46.476 11:48:45 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:46.476 11:48:45 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:46.476 11:48:45 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:46.476 11:48:45 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.476 1+0 records in 00:06:46.476 1+0 records out 00:06:46.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319631 s, 12.8 MB/s 00:06:46.476 11:48:45 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.476 11:48:45 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:46.476 11:48:45 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:46.476 11:48:45 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:46.476 11:48:45 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:46.476 11:48:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.476 11:48:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.476 11:48:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.476 11:48:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.476 11:48:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.476 11:48:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.476 { 00:06:46.476 "nbd_device": "/dev/nbd0", 00:06:46.476 "bdev_name": "Malloc0" 00:06:46.476 }, 00:06:46.476 { 00:06:46.476 "nbd_device": "/dev/nbd1", 00:06:46.476 "bdev_name": "Malloc1" 00:06:46.476 } 00:06:46.476 ]' 00:06:46.476 11:48:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.476 { 00:06:46.476 "nbd_device": "/dev/nbd0", 00:06:46.476 "bdev_name": "Malloc0" 00:06:46.476 }, 00:06:46.476 { 00:06:46.476 "nbd_device": "/dev/nbd1", 00:06:46.476 "bdev_name": "Malloc1" 00:06:46.476 } 00:06:46.476 ]' 00:06:46.476 11:48:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:46.736 /dev/nbd1' 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:46.736 /dev/nbd1' 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:46.736 256+0 records in 00:06:46.736 256+0 records out 00:06:46.736 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130266 s, 80.5 MB/s 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:46.736 256+0 records in 00:06:46.736 256+0 records out 00:06:46.736 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227824 s, 46.0 MB/s 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.736 11:48:45 
event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:46.736 256+0 records in 00:06:46.736 256+0 records out 00:06:46.736 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245748 s, 42.7 MB/s 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.736 11:48:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:46.994 11:48:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:46.994 11:48:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:46.994 11:48:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:46.994 11:48:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.994 11:48:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.994 11:48:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:46.994 11:48:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.994 11:48:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.994 11:48:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.994 11:48:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:47.253 11:48:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:47.253 11:48:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:47.253 11:48:45 event.app_repeat -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:06:47.253 11:48:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.253 11:48:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.253 11:48:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:47.253 11:48:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:47.253 11:48:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.253 11:48:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.253 11:48:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.253 11:48:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.253 11:48:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:47.253 11:48:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:47.253 11:48:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:47.512 11:48:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:47.512 11:48:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:47.512 11:48:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:47.512 11:48:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:47.512 11:48:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:47.512 11:48:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:47.512 11:48:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:47.512 11:48:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:47.512 11:48:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:47.512 11:48:46 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:47.512 11:48:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:47.774 [2024-07-21 11:48:46.534664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.774 [2024-07-21 11:48:46.579539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.774 [2024-07-21 11:48:46.579544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.774 [2024-07-21 11:48:46.621392] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:47.774 [2024-07-21 11:48:46.621455] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:51.062 spdk_app_start Round 1 00:06:51.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:51.062 11:48:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:51.062 11:48:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:51.062 11:48:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75639 /var/tmp/spdk-nbd.sock 00:06:51.062 11:48:49 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 75639 ']' 00:06:51.062 11:48:49 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:51.062 11:48:49 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:51.062 11:48:49 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
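Round 0 above ends with the nbd_rpc_data_verify pass; stripped of xtrace noise, the two bdev/nbd_common.sh helpers at its core look roughly like this (hedged reconstructions from the trace, reusing the file paths this job used; the retry delay in waitfornbd is an assumption, as it is not visible in the trace):

    # nbd_dd_data_verify: write 1 MiB of random data through each nbd device, read it back, compare
    randfile=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of=$randfile bs=4096 count=256          # 4096 x 256 = 1 MiB
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$randfile of=$nbd bs=4096 count=256 oflag=direct
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $randfile $nbd                            # byte-for-byte comparison
    done
    rm $randfile

    # waitfornbd: poll /proc/partitions, then prove the device with one 4 KiB O_DIRECT read
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1                                          # assumed back-off between retries
        done
        dd if=/dev/$nbd_name of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest)
        rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
        [ "$size" != 0 ]                                       # fail if nothing was read back
    }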
00:06:51.062 11:48:49 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:51.062 11:48:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:51.062 11:48:49 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:51.062 11:48:49 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:51.062 11:48:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.062 Malloc0 00:06:51.062 11:48:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:51.062 Malloc1 00:06:51.321 11:48:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.321 11:48:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.321 11:48:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.321 11:48:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:51.321 11:48:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.321 11:48:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:51.321 11:48:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:51.321 11:48:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.321 11:48:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:51.321 11:48:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:51.321 11:48:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.321 11:48:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:51.321 11:48:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:51.321 11:48:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:51.321 11:48:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.321 11:48:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:51.321 /dev/nbd0 00:06:51.321 11:48:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:51.321 11:48:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:51.321 11:48:50 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:51.321 11:48:50 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:51.321 11:48:50 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:51.321 11:48:50 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:51.321 11:48:50 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:51.321 11:48:50 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:51.321 11:48:50 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:51.321 11:48:50 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:51.321 11:48:50 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.321 1+0 records in 00:06:51.321 1+0 records out 
00:06:51.321 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319454 s, 12.8 MB/s 00:06:51.321 11:48:50 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.321 11:48:50 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:51.321 11:48:50 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.321 11:48:50 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:51.321 11:48:50 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:51.321 11:48:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.321 11:48:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.321 11:48:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:51.578 /dev/nbd1 00:06:51.578 11:48:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:51.578 11:48:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:51.578 11:48:50 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:51.578 11:48:50 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:51.578 11:48:50 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:51.578 11:48:50 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:51.578 11:48:50 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:51.578 11:48:50 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:51.578 11:48:50 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:51.578 11:48:50 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:51.578 11:48:50 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:51.578 1+0 records in 00:06:51.578 1+0 records out 00:06:51.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000338564 s, 12.1 MB/s 00:06:51.578 11:48:50 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.578 11:48:50 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:51.578 11:48:50 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:51.578 11:48:50 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:51.578 11:48:50 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:51.578 11:48:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:51.578 11:48:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:51.578 11:48:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:51.578 11:48:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.578 11:48:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:51.841 { 00:06:51.841 "nbd_device": "/dev/nbd0", 00:06:51.841 "bdev_name": "Malloc0" 00:06:51.841 }, 00:06:51.841 { 00:06:51.841 "nbd_device": "/dev/nbd1", 00:06:51.841 "bdev_name": "Malloc1" 00:06:51.841 } 
00:06:51.841 ]' 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:51.841 { 00:06:51.841 "nbd_device": "/dev/nbd0", 00:06:51.841 "bdev_name": "Malloc0" 00:06:51.841 }, 00:06:51.841 { 00:06:51.841 "nbd_device": "/dev/nbd1", 00:06:51.841 "bdev_name": "Malloc1" 00:06:51.841 } 00:06:51.841 ]' 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:51.841 /dev/nbd1' 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:51.841 /dev/nbd1' 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:51.841 256+0 records in 00:06:51.841 256+0 records out 00:06:51.841 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00479012 s, 219 MB/s 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:51.841 256+0 records in 00:06:51.841 256+0 records out 00:06:51.841 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0215255 s, 48.7 MB/s 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:51.841 256+0 records in 00:06:51.841 256+0 records out 00:06:51.841 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.026772 s, 39.2 MB/s 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:51.841 11:48:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:52.105 11:48:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:52.105 11:48:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:52.105 11:48:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:52.105 11:48:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:52.105 11:48:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.105 11:48:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.105 11:48:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:52.105 11:48:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:52.105 11:48:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.105 11:48:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:52.105 11:48:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:52.105 11:48:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:52.105 11:48:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:52.105 11:48:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.105 11:48:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.105 11:48:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:52.105 11:48:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:52.105 11:48:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.105 11:48:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.105 11:48:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:52.364 11:48:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:52.364 11:48:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:52.364 11:48:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:52.364 11:48:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.364 11:48:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.364 11:48:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:52.364 11:48:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:52.364 11:48:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.364 11:48:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.364 11:48:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.364 11:48:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.623 11:48:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:52.623 11:48:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:52.623 11:48:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:52.623 11:48:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.623 11:48:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.623 11:48:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.623 11:48:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:52.623 11:48:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:52.623 11:48:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.623 11:48:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:52.623 11:48:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:52.623 11:48:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:52.623 11:48:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:52.882 11:48:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:52.882 [2024-07-21 11:48:51.697485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.882 [2024-07-21 11:48:51.739388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.882 [2024-07-21 11:48:51.739433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.141 [2024-07-21 11:48:51.781424] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:53.141 [2024-07-21 11:48:51.781482] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:56.429 spdk_app_start Round 2 00:06:56.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:56.429 11:48:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:56.429 11:48:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:56.429 11:48:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 75639 /var/tmp/spdk-nbd.sock 00:06:56.429 11:48:54 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 75639 ']' 00:06:56.429 11:48:54 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:56.429 11:48:54 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:56.429 11:48:54 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
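
The teardown check that just completed (nbd_get_count returning 0 once both devices are stopped) reduces to one RPC plus text processing. A sketch following the nbd_common.sh trace; the bare `true` entries in the log correspond to grep exiting non-zero when nothing matches, which the helper has to absorb:

    nbd_get_count() {
        local rpc_server=$1 nbd_disks_json nbd_disks_name count
        nbd_disks_json=$(rpc.py -s "$rpc_server" nbd_get_disks)
        # Pull out the device paths; an empty list yields an empty string.
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        # grep -c prints 0 but exits 1 when nothing matches, so guard it.
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
        echo "$count"
    }
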
00:06:56.429 11:48:54 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:56.429 11:48:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:56.429 11:48:54 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:56.429 11:48:54 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:06:56.429 11:48:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:56.429 Malloc0 00:06:56.429 11:48:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:56.429 Malloc1 00:06:56.429 11:48:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:56.429 11:48:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.429 11:48:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:56.429 11:48:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:56.429 11:48:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.429 11:48:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:56.429 11:48:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:56.429 11:48:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.429 11:48:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:56.429 11:48:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:56.429 11:48:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:56.429 11:48:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:56.429 11:48:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:56.430 11:48:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:56.430 11:48:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.430 11:48:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:56.688 /dev/nbd0 00:06:56.688 11:48:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:56.688 11:48:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:56.688 11:48:55 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:06:56.688 11:48:55 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:56.688 11:48:55 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:56.688 11:48:55 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:56.688 11:48:55 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:06:56.688 11:48:55 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:56.688 11:48:55 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:56.688 11:48:55 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:56.688 11:48:55 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:56.688 1+0 records in 00:06:56.688 1+0 records out 
00:06:56.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000402418 s, 10.2 MB/s 00:06:56.688 11:48:55 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:56.688 11:48:55 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:56.688 11:48:55 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:56.688 11:48:55 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:56.688 11:48:55 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:56.688 11:48:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.688 11:48:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.688 11:48:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:56.946 /dev/nbd1 00:06:56.946 11:48:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:56.946 11:48:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:56.946 11:48:55 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:06:56.946 11:48:55 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:06:56.946 11:48:55 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:06:56.946 11:48:55 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:06:56.946 11:48:55 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:06:56.946 11:48:55 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:06:56.946 11:48:55 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:06:56.946 11:48:55 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:06:56.946 11:48:55 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:56.946 1+0 records in 00:06:56.946 1+0 records out 00:06:56.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328872 s, 12.5 MB/s 00:06:56.946 11:48:55 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:56.946 11:48:55 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:06:56.946 11:48:55 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:56.946 11:48:55 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:06:56.946 11:48:55 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:06:56.946 11:48:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:56.946 11:48:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:56.946 11:48:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:56.946 11:48:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:56.946 11:48:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:57.205 { 00:06:57.205 "nbd_device": "/dev/nbd0", 00:06:57.205 "bdev_name": "Malloc0" 00:06:57.205 }, 00:06:57.205 { 00:06:57.205 "nbd_device": "/dev/nbd1", 00:06:57.205 "bdev_name": "Malloc1" 00:06:57.205 } 
00:06:57.205 ]' 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:57.205 { 00:06:57.205 "nbd_device": "/dev/nbd0", 00:06:57.205 "bdev_name": "Malloc0" 00:06:57.205 }, 00:06:57.205 { 00:06:57.205 "nbd_device": "/dev/nbd1", 00:06:57.205 "bdev_name": "Malloc1" 00:06:57.205 } 00:06:57.205 ]' 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:57.205 /dev/nbd1' 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:57.205 /dev/nbd1' 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:57.205 256+0 records in 00:06:57.205 256+0 records out 00:06:57.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131701 s, 79.6 MB/s 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:57.205 256+0 records in 00:06:57.205 256+0 records out 00:06:57.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237748 s, 44.1 MB/s 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:57.205 256+0 records in 00:06:57.205 256+0 records out 00:06:57.205 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.032433 s, 32.3 MB/s 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:57.205 11:48:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:57.205 11:48:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:57.205 11:48:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:57.205 11:48:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:57.205 11:48:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:57.205 11:48:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.205 11:48:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:57.205 11:48:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:57.205 11:48:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:57.205 11:48:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.205 11:48:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:57.464 11:48:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:57.464 11:48:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:57.464 11:48:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:57.464 11:48:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.464 11:48:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.464 11:48:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:57.464 11:48:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:57.464 11:48:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.464 11:48:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:57.464 11:48:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:57.723 11:48:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:57.723 11:48:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:57.723 11:48:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:57.723 11:48:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:57.723 11:48:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:57.723 11:48:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:57.723 11:48:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:57.723 11:48:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:57.723 11:48:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:57.723 11:48:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:57.723 11:48:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:57.983 11:48:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:57.983 11:48:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:57.983 11:48:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:06:57.983 11:48:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:57.983 11:48:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:57.983 11:48:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:57.983 11:48:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:57.983 11:48:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:57.983 11:48:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:57.983 11:48:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:57.983 11:48:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:57.983 11:48:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:57.983 11:48:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:58.244 11:48:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:58.504 [2024-07-21 11:48:57.211251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:58.504 [2024-07-21 11:48:57.259064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.504 [2024-07-21 11:48:57.259069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.504 [2024-07-21 11:48:57.304255] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:58.504 [2024-07-21 11:48:57.304327] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:01.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:01.806 11:49:00 event.app_repeat -- event/event.sh@38 -- # waitforlisten 75639 /var/tmp/spdk-nbd.sock 00:07:01.807 11:49:00 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 75639 ']' 00:07:01.807 11:49:00 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:01.807 11:49:00 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:01.807 11:49:00 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
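
The write/verify pass traced in each round (nbd_common.sh@70-85) is a plain dd/cmp round trip: fill a 1 MiB scratch file from /dev/urandom, copy it onto every NBD device with direct I/O, then compare each device's first megabyte back against the file. Roughly, with an illustrative scratch path:

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2 i
        local tmp_file=/tmp/nbdrandtest    # illustrative; the run uses a repo-local file
        if [ "$operation" = write ]; then
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256       # 1 MiB of random data
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"    # byte-for-byte check of the first MiB
            done
            rm "$tmp_file"                       # scratch file dropped after the verify pass
        fi
    }
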
00:07:01.807 11:49:00 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:01.807 11:49:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:01.807 11:49:00 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:01.807 11:49:00 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:07:01.807 11:49:00 event.app_repeat -- event/event.sh@39 -- # killprocess 75639 00:07:01.807 11:49:00 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 75639 ']' 00:07:01.807 11:49:00 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 75639 00:07:01.807 11:49:00 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:07:01.807 11:49:00 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:01.807 11:49:00 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 75639 00:07:01.807 11:49:00 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:01.807 11:49:00 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:01.807 11:49:00 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 75639' 00:07:01.807 killing process with pid 75639 00:07:01.807 11:49:00 event.app_repeat -- common/autotest_common.sh@965 -- # kill 75639 00:07:01.807 11:49:00 event.app_repeat -- common/autotest_common.sh@970 -- # wait 75639 00:07:01.807 spdk_app_start is called in Round 0. 00:07:01.807 Shutdown signal received, stop current app iteration 00:07:01.807 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:07:01.807 spdk_app_start is called in Round 1. 00:07:01.807 Shutdown signal received, stop current app iteration 00:07:01.807 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:07:01.807 spdk_app_start is called in Round 2. 00:07:01.807 Shutdown signal received, stop current app iteration 00:07:01.807 Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 reinitialization... 00:07:01.807 spdk_app_start is called in Round 3. 00:07:01.807 Shutdown signal received, stop current app iteration 00:07:01.807 11:49:00 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:01.807 11:49:00 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:01.807 00:07:01.807 real 0m17.138s 00:07:01.807 user 0m37.641s 00:07:01.807 sys 0m2.418s 00:07:01.807 11:49:00 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:01.807 11:49:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:01.807 ************************************ 00:07:01.807 END TEST app_repeat 00:07:01.807 ************************************ 00:07:01.807 11:49:00 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:01.807 11:49:00 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:01.807 11:49:00 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:01.807 11:49:00 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:01.807 11:49:00 event -- common/autotest_common.sh@10 -- # set +x 00:07:01.807 ************************************ 00:07:01.807 START TEST cpu_locks 00:07:01.807 ************************************ 00:07:01.807 11:49:00 event.cpu_locks -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:01.807 * Looking for test storage... 
00:07:01.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:01.807 11:49:00 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:01.807 11:49:00 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:02.065 11:49:00 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:02.065 11:49:00 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:02.065 11:49:00 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:02.065 11:49:00 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:02.065 11:49:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.065 ************************************ 00:07:02.065 START TEST default_locks 00:07:02.065 ************************************ 00:07:02.065 11:49:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:07:02.065 11:49:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=76051 00:07:02.065 11:49:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:02.065 11:49:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 76051 00:07:02.065 11:49:00 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 76051 ']' 00:07:02.065 11:49:00 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.065 11:49:00 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:02.065 11:49:00 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.065 11:49:00 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:02.065 11:49:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.065 [2024-07-21 11:49:00.773484] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
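
The app_repeat target above was shut down through the autotest killprocess helper, which the cpu_locks targets below reuse: confirm the pid is given and alive, look up its command name so a process running under sudo is not signalled directly, then kill and wait so the exit status is reaped. A sketch; the body of the sudo branch is an assumption inferred from the `'[' reactor_0 = sudo ']'` check in the trace:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        kill -0 "$pid"                                  # fails if the pid is already gone
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            # Assumed: signal the wrapped child rather than the sudo wrapper.
            pid=$(ps --ppid "$pid" --no-headers -o pid=)
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                     # reap it and propagate the exit code
    }
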
00:07:02.065 [2024-07-21 11:49:00.773619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76051 ] 00:07:02.065 [2024-07-21 11:49:00.927358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.323 [2024-07-21 11:49:00.975417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.891 11:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:02.891 11:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:07:02.891 11:49:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 76051 00:07:02.891 11:49:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 76051 00:07:02.891 11:49:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:03.150 11:49:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 76051 00:07:03.150 11:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 76051 ']' 00:07:03.150 11:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 76051 00:07:03.150 11:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:07:03.150 11:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:03.150 11:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76051 00:07:03.150 killing process with pid 76051 00:07:03.150 11:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:03.150 11:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:03.150 11:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76051' 00:07:03.150 11:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 76051 00:07:03.150 11:49:01 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 76051 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 76051 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 76051 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # waitforlisten 76051 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 76051 ']' 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- 
common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.717 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (76051) - No such process 00:07:03.717 ERROR: process (pid: 76051) is no longer running 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:03.717 ************************************ 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:03.717 00:07:03.717 real 0m1.654s 00:07:03.717 user 0m1.586s 00:07:03.717 sys 0m0.576s 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:03.717 11:49:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.717 END TEST default_locks 00:07:03.717 ************************************ 00:07:03.717 11:49:02 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:03.717 11:49:02 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:03.717 11:49:02 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:03.717 11:49:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.717 ************************************ 00:07:03.717 START TEST default_locks_via_rpc 00:07:03.717 ************************************ 00:07:03.717 11:49:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:07:03.717 11:49:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=76098 00:07:03.717 11:49:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:03.717 11:49:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 76098 00:07:03.717 11:49:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 76098 ']' 00:07:03.717 11:49:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
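
The default_locks test that just finished boils down to two assertions: a target started with -m 0x1 must hold a file lock for core 0 while it runs, and killing it must release the lock, so the follow-up waitforlisten on the dead pid is expected to fail (the NOT wrapper and the es=1 bookkeeping above assert exactly that, with the `kill: (76051) - No such process` error being the intended outcome). The lock probe itself is one pipeline, per the cpu_locks.sh@22 trace:

    locks_exist() {
        local pid=$1
        # spdk_tgt takes one file lock per claimed core; lslocks lists them
        # with names prefixed spdk_cpu_lock.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
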
00:07:03.717 11:49:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:03.717 11:49:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.717 11:49:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:03.717 11:49:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.717 [2024-07-21 11:49:02.485592] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:03.717 [2024-07-21 11:49:02.485708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76098 ] 00:07:03.976 [2024-07-21 11:49:02.642976] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.976 [2024-07-21 11:49:02.689142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.547 11:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:04.547 11:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:04.547 11:49:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:04.547 11:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.547 11:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.547 11:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.547 11:49:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:04.547 11:49:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:04.547 11:49:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:04.547 11:49:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:04.547 11:49:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:04.547 11:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.547 11:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.547 11:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.547 11:49:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 76098 00:07:04.547 11:49:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 76098 00:07:04.547 11:49:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:05.116 11:49:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 76098 00:07:05.116 11:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 76098 ']' 00:07:05.116 11:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 76098 00:07:05.116 11:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:07:05.116 11:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 
-- # '[' Linux = Linux ']' 00:07:05.116 11:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76098 00:07:05.116 killing process with pid 76098 00:07:05.116 11:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:05.116 11:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:05.116 11:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76098' 00:07:05.116 11:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 76098 00:07:05.116 11:49:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 76098 00:07:05.375 00:07:05.375 real 0m1.822s 00:07:05.375 user 0m1.797s 00:07:05.375 sys 0m0.622s 00:07:05.375 ************************************ 00:07:05.375 END TEST default_locks_via_rpc 00:07:05.375 ************************************ 00:07:05.375 11:49:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:05.375 11:49:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.634 11:49:04 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:05.634 11:49:04 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:05.634 11:49:04 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:05.634 11:49:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:05.634 ************************************ 00:07:05.634 START TEST non_locking_app_on_locked_coremask 00:07:05.634 ************************************ 00:07:05.634 11:49:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:07:05.634 11:49:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=76145 00:07:05.634 11:49:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:05.634 11:49:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 76145 /var/tmp/spdk.sock 00:07:05.634 11:49:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76145 ']' 00:07:05.634 11:49:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.634 11:49:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:05.634 11:49:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.634 11:49:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:05.634 11:49:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.634 [2024-07-21 11:49:04.388912] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
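
default_locks_via_rpc, completed above, exercises the same lock from the RPC side: with the target still running, framework_disable_cpumask_locks must drop the core-0 lock (no_locks then sees an empty lock_files list) and framework_enable_cpumask_locks must restore it before shutdown. Condensed from the trace, where rpc_cmd is the autotest wrapper around scripts/rpc.py and each call is expected to succeed (the `[[ 0 == 0 ]]` checks):

    rpc_cmd framework_disable_cpumask_locks   # releases the per-core file locks
    no_locks                                  # asserts lslocks shows none for the pid
    rpc_cmd framework_enable_cpumask_locks    # re-acquires them
    locks_exist "$spdk_tgt_pid"               # lock file is back before killprocess
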
00:07:05.634 [2024-07-21 11:49:04.389192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76145 ] 00:07:05.894 [2024-07-21 11:49:04.549076] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.894 [2024-07-21 11:49:04.599116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.461 11:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:06.461 11:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:06.461 11:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=76161 00:07:06.461 11:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:06.462 11:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 76161 /var/tmp/spdk2.sock 00:07:06.462 11:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76161 ']' 00:07:06.462 11:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.462 11:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:06.462 11:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:06.462 11:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:06.462 11:49:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.462 [2024-07-21 11:49:05.320286] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:06.462 [2024-07-21 11:49:05.320516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76161 ] 00:07:06.720 [2024-07-21 11:49:05.472732] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
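
non_locking_app_on_locked_coremask, now underway, covers contention: the first target (pid 76145 here) locks core 0, and a second target asked to run on the same mask can only come up because it is launched with --disable-cpumask-locks, which the `CPU core locks deactivated` notice above confirms. The shape of the scenario, with paths abbreviated:

    spdk_tgt -m 0x1 &                 # first instance; takes the core-0 lock
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid" /var/tmp/spdk.sock

    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    spdk_tgt_pid2=$!                  # same mask, but opts out of locking
    waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock
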
00:07:06.720 [2024-07-21 11:49:05.472826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.720 [2024-07-21 11:49:05.579827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.286 11:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:07.286 11:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:07.286 11:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 76145 00:07:07.286 11:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 76145 00:07:07.286 11:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:07.851 11:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 76145 00:07:07.851 11:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 76145 ']' 00:07:07.851 11:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 76145 00:07:07.851 11:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:07.851 11:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:07.851 11:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76145 00:07:07.851 killing process with pid 76145 00:07:07.851 11:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:07.851 11:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:07.851 11:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76145' 00:07:07.851 11:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 76145 00:07:07.851 11:49:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 76145 00:07:08.787 11:49:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 76161 00:07:08.787 11:49:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 76161 ']' 00:07:08.787 11:49:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 76161 00:07:08.787 11:49:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:08.787 11:49:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:08.787 11:49:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76161 00:07:08.787 killing process with pid 76161 00:07:08.787 11:49:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:08.787 11:49:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:08.787 11:49:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76161' 00:07:08.787 11:49:07 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 76161 00:07:08.787 11:49:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 76161 00:07:09.046 ************************************ 00:07:09.046 END TEST non_locking_app_on_locked_coremask 00:07:09.046 ************************************ 00:07:09.046 00:07:09.046 real 0m3.409s 00:07:09.046 user 0m3.596s 00:07:09.046 sys 0m0.995s 00:07:09.046 11:49:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:09.046 11:49:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.046 11:49:07 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:09.046 11:49:07 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:09.046 11:49:07 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:09.046 11:49:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.046 ************************************ 00:07:09.046 START TEST locking_app_on_unlocked_coremask 00:07:09.046 ************************************ 00:07:09.046 11:49:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:07:09.046 11:49:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=76229 00:07:09.046 11:49:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:09.046 11:49:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 76229 /var/tmp/spdk.sock 00:07:09.046 11:49:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76229 ']' 00:07:09.046 11:49:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.046 11:49:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:09.046 11:49:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.046 11:49:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:09.046 11:49:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.046 [2024-07-21 11:49:07.840991] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:09.046 [2024-07-21 11:49:07.841254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76229 ] 00:07:09.305 [2024-07-21 11:49:08.003944] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
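
locking_app_on_unlocked_coremask inverts the roles: the first target starts with --disable-cpumask-locks (hence its own `CPU core locks deactivated` notice above), leaving core 0 unlocked, so a second, normally-locking target on the same mask (pid 76240 in this run) starts below and is then checked with locks_exist. Mirrored from the previous sketch:

    spdk_tgt -m 0x1 --disable-cpumask-locks &     # first instance holds no lock
    waitforlisten "$!" /var/tmp/spdk.sock

    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &      # second instance locks core 0
    spdk_tgt_pid2=$!
    waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock
    locks_exist "$spdk_tgt_pid2"                  # the lock belongs to the locking app
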
00:07:09.305 [2024-07-21 11:49:08.004149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.305 [2024-07-21 11:49:08.055217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.873 11:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:09.873 11:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:09.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:09.873 11:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=76240 00:07:09.873 11:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:09.873 11:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 76240 /var/tmp/spdk2.sock 00:07:09.873 11:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76240 ']' 00:07:09.873 11:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:09.873 11:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:09.873 11:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:09.873 11:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:09.873 11:49:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.132 [2024-07-21 11:49:08.738988] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:10.132 [2024-07-21 11:49:08.739199] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76240 ] 00:07:10.132 [2024-07-21 11:49:08.898067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.391 [2024-07-21 11:49:08.999242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.960 11:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:10.960 11:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:10.960 11:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 76240 00:07:10.960 11:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 76240 00:07:10.960 11:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:10.960 11:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 76229 00:07:10.960 11:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 76229 ']' 00:07:10.960 11:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 76229 00:07:10.960 11:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:11.220 11:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:11.220 11:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76229 00:07:11.220 killing process with pid 76229 00:07:11.220 11:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:11.220 11:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:11.220 11:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76229' 00:07:11.220 11:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 76229 00:07:11.220 11:49:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 76229 00:07:11.789 11:49:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 76240 00:07:11.789 11:49:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 76240 ']' 00:07:11.789 11:49:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 76240 00:07:11.789 11:49:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:11.789 11:49:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:11.789 11:49:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76240 00:07:12.048 11:49:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:12.048 11:49:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' 
reactor_0 = sudo ']' 00:07:12.048 killing process with pid 76240 00:07:12.048 11:49:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76240' 00:07:12.048 11:49:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 76240 00:07:12.048 11:49:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 76240 00:07:12.308 00:07:12.308 real 0m3.296s 00:07:12.308 user 0m3.382s 00:07:12.308 sys 0m0.996s 00:07:12.308 ************************************ 00:07:12.308 END TEST locking_app_on_unlocked_coremask 00:07:12.308 ************************************ 00:07:12.308 11:49:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:12.308 11:49:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.308 11:49:11 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:12.308 11:49:11 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:12.308 11:49:11 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:12.308 11:49:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.308 ************************************ 00:07:12.308 START TEST locking_app_on_locked_coremask 00:07:12.308 ************************************ 00:07:12.308 11:49:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:07:12.308 11:49:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=76304 00:07:12.308 11:49:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 76304 /var/tmp/spdk.sock 00:07:12.308 11:49:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76304 ']' 00:07:12.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.308 11:49:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.308 11:49:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:12.308 11:49:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.308 11:49:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:12.308 11:49:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:12.308 11:49:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.566 [2024-07-21 11:49:11.208635] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:12.566 [2024-07-21 11:49:11.208784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76304 ] 00:07:12.566 [2024-07-21 11:49:11.391847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.825 [2024-07-21 11:49:11.446982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.398 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:13.398 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:13.398 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:13.398 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=76320 00:07:13.398 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 76320 /var/tmp/spdk2.sock 00:07:13.398 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:13.398 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 76320 /var/tmp/spdk2.sock 00:07:13.398 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:13.398 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:13.398 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:13.398 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:13.398 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 76320 /var/tmp/spdk2.sock 00:07:13.398 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 76320 ']' 00:07:13.398 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:13.398 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:13.398 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:13.398 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:13.398 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:13.398 [2024-07-21 11:49:12.111921] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:13.398 [2024-07-21 11:49:12.112160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76320 ] 00:07:13.656 [2024-07-21 11:49:12.271581] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 76304 has claimed it. 00:07:13.656 [2024-07-21 11:49:12.271669] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:13.915 ERROR: process (pid: 76320) is no longer running 00:07:13.915 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (76320) - No such process 00:07:13.915 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:13.915 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:13.915 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:13.915 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:13.915 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:13.915 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:13.915 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 76304 00:07:13.915 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 76304 00:07:13.915 11:49:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:14.481 11:49:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 76304 00:07:14.481 11:49:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 76304 ']' 00:07:14.481 11:49:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 76304 00:07:14.481 11:49:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:07:14.481 11:49:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:14.481 11:49:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76304 00:07:14.481 killing process with pid 76304 00:07:14.481 11:49:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:14.481 11:49:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:14.481 11:49:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76304' 00:07:14.481 11:49:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 76304 00:07:14.481 11:49:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 76304 00:07:14.740 00:07:14.740 real 0m2.493s 00:07:14.740 user 0m2.680s 00:07:14.740 sys 0m0.736s 00:07:14.740 11:49:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:14.740 ************************************ 00:07:14.740 END TEST 
locking_app_on_locked_coremask 00:07:14.740 ************************************ 00:07:14.740 11:49:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.999 11:49:13 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:14.999 11:49:13 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:14.999 11:49:13 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:14.999 11:49:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.999 ************************************ 00:07:14.999 START TEST locking_overlapped_coremask 00:07:14.999 ************************************ 00:07:14.999 11:49:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:07:14.999 11:49:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=76362 00:07:14.999 11:49:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:14.999 11:49:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 76362 /var/tmp/spdk.sock 00:07:14.999 11:49:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 76362 ']' 00:07:14.999 11:49:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.999 11:49:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:14.999 11:49:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.999 11:49:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:14.999 11:49:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.999 [2024-07-21 11:49:13.757334] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:14.999 [2024-07-21 11:49:13.757453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76362 ] 00:07:15.258 [2024-07-21 11:49:13.908302] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.258 [2024-07-21 11:49:13.958352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.258 [2024-07-21 11:49:13.958462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.258 [2024-07-21 11:49:13.958586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.827 11:49:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:15.827 11:49:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:07:15.827 11:49:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:15.827 11:49:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=76380 00:07:15.827 11:49:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 76380 /var/tmp/spdk2.sock 00:07:15.827 11:49:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:15.827 11:49:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 76380 /var/tmp/spdk2.sock 00:07:15.827 11:49:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:15.827 11:49:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.827 11:49:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:15.827 11:49:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:15.827 11:49:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 76380 /var/tmp/spdk2.sock 00:07:15.827 11:49:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 76380 ']' 00:07:15.827 11:49:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:15.827 11:49:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:15.827 11:49:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:15.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:15.827 11:49:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:15.827 11:49:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.827 [2024-07-21 11:49:14.643730] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:15.827 [2024-07-21 11:49:14.643952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76380 ] 00:07:16.086 [2024-07-21 11:49:14.798686] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 76362 has claimed it. 00:07:16.086 [2024-07-21 11:49:14.798756] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:16.654 ERROR: process (pid: 76380) is no longer running 00:07:16.654 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 842: kill: (76380) - No such process 00:07:16.654 11:49:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:16.654 11:49:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:07:16.654 11:49:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:16.654 11:49:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:16.654 11:49:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:16.654 11:49:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:16.654 11:49:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:16.654 11:49:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:16.654 11:49:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:16.654 11:49:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:16.654 11:49:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 76362 00:07:16.654 11:49:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 76362 ']' 00:07:16.654 11:49:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 76362 00:07:16.654 11:49:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:07:16.654 11:49:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:16.654 11:49:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76362 00:07:16.654 11:49:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:16.654 11:49:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:16.654 11:49:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76362' 00:07:16.654 killing process with pid 76362 00:07:16.654 11:49:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 76362 00:07:16.654 11:49:15 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@970 -- # wait 76362 00:07:16.966 00:07:16.966 real 0m2.025s 00:07:16.966 user 0m5.307s 00:07:16.966 sys 0m0.525s 00:07:16.966 11:49:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:16.966 ************************************ 00:07:16.966 END TEST locking_overlapped_coremask 00:07:16.966 ************************************ 00:07:16.966 11:49:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.966 11:49:15 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:16.966 11:49:15 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:16.966 11:49:15 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:16.966 11:49:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.966 ************************************ 00:07:16.966 START TEST locking_overlapped_coremask_via_rpc 00:07:16.966 ************************************ 00:07:16.966 11:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:07:16.966 11:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=76422 00:07:16.966 11:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 76422 /var/tmp/spdk.sock 00:07:16.966 11:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:16.966 11:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 76422 ']' 00:07:16.966 11:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.966 11:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:16.966 11:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.966 11:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:16.966 11:49:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.225 [2024-07-21 11:49:15.842501] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:17.225 [2024-07-21 11:49:15.842721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76422 ] 00:07:17.225 [2024-07-21 11:49:15.997007] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
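The locking_overlapped_coremask failure just recorded follows directly from the two cpumasks: -m 0x7 pins the first target (pid 76362) to cores 0-2, while -m 0x1c asks the second (pid 76380) for cores 2-4, so they collide on core 2 and the second target exits with "Cannot create lock on core 2 ... Unable to acquire lock on assigned core mask - exiting." Expanded:

    0x07 = 0b00111 -> cores 0,1,2   (first target, holds the locks)
    0x1c = 0b11100 -> cores 2,3,4   (second target, blocked on core 2)

The via_rpc variant starting here replays the same collision, except both targets boot with --disable-cpumask-locks and only claim their cores afterwards over JSON-RPC.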
00:07:17.225 [2024-07-21 11:49:15.997201] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:17.225 [2024-07-21 11:49:16.051385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.225 [2024-07-21 11:49:16.051497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.225 [2024-07-21 11:49:16.051280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.791 11:49:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:17.791 11:49:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:17.791 11:49:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=76440 00:07:17.791 11:49:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:17.791 11:49:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 76440 /var/tmp/spdk2.sock 00:07:17.791 11:49:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 76440 ']' 00:07:17.791 11:49:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:17.791 11:49:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:17.791 11:49:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:17.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:17.791 11:49:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:17.791 11:49:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.048 [2024-07-21 11:49:16.742401] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:18.048 [2024-07-21 11:49:16.742602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76440 ] 00:07:18.048 [2024-07-21 11:49:16.898457] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:18.048 [2024-07-21 11:49:16.898544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:18.306 [2024-07-21 11:49:17.012420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.306 [2024-07-21 11:49:17.012609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:18.306 [2024-07-21 11:49:17.015838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.872 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:18.872 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:18.872 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:18.872 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.872 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.872 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.872 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:18.872 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:18.872 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:18.872 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:18.872 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.872 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:18.872 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:18.872 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:18.872 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.872 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.872 [2024-07-21 11:49:17.579207] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 76422 has claimed it. 00:07:18.872 request: 00:07:18.872 { 00:07:18.872 "method": "framework_enable_cpumask_locks", 00:07:18.872 "req_id": 1 00:07:18.872 } 00:07:18.872 Got JSON-RPC error response 00:07:18.872 response: 00:07:18.872 { 00:07:18.872 "code": -32603, 00:07:18.872 "message": "Failed to claim CPU core: 2" 00:07:18.872 } 00:07:18.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
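The rpc_cmd calls traced above wrap SPDK's scripts/rpc.py. Reproducing the sequence by hand would look roughly like the following; the repo path and socket names are taken from this run, so treat them as assumptions of this environment:

    # first target (pid 76422, cores 0-2) claims its locks on the default socket:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
    # second target (pid 76440, cores 2-4) then fails with code -32603,
    # "Failed to claim CPU core: 2", exactly as in the response above:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks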
00:07:18.872 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:18.872 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:18.872 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:18.872 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:18.872 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:18.872 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 76422 /var/tmp/spdk.sock 00:07:18.873 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 76422 ']' 00:07:18.873 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.873 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:18.873 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.873 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:18.873 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:19.131 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:19.131 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:19.131 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 76440 /var/tmp/spdk2.sock 00:07:19.131 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 76440 ']' 00:07:19.131 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:19.131 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:19.131 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:19.131 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:19.131 11:49:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.393 11:49:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:19.393 11:49:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:19.393 11:49:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:19.393 11:49:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:19.393 11:49:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:19.393 11:49:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:19.393 00:07:19.393 real 0m2.263s 00:07:19.393 user 0m1.034s 00:07:19.393 sys 0m0.158s 00:07:19.393 11:49:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:19.393 11:49:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:19.393 ************************************ 00:07:19.393 END TEST locking_overlapped_coremask_via_rpc 00:07:19.393 ************************************ 00:07:19.393 11:49:18 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:19.393 11:49:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 76422 ]] 00:07:19.393 11:49:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 76422 00:07:19.393 11:49:18 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 76422 ']' 00:07:19.393 11:49:18 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 76422 00:07:19.393 11:49:18 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:19.393 11:49:18 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:19.393 11:49:18 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76422 00:07:19.393 11:49:18 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:19.393 killing process with pid 76422 00:07:19.393 11:49:18 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:19.393 11:49:18 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76422' 00:07:19.393 11:49:18 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 76422 00:07:19.393 11:49:18 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 76422 00:07:19.655 11:49:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 76440 ]] 00:07:19.655 11:49:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 76440 00:07:19.655 11:49:18 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 76440 ']' 00:07:19.655 11:49:18 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 76440 00:07:19.655 11:49:18 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:07:19.655 11:49:18 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:19.655 
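The check_remaining_locks pass traced above (event/cpu_locks.sh@36-38) boils down to comparing a glob of the lock files on disk against the set expected for cores 0-2; a sketch of the check as it appears in the trace:

    check_remaining_locks() {
      locks=(/var/tmp/spdk_cpu_lock_*)                     # what is actually on disk
      locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2, nothing else
      [[ "${locks[*]}" == "${locks_expected[*]}" ]]
    }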
11:49:18 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76440 00:07:19.655 killing process with pid 76440 00:07:19.655 11:49:18 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:07:19.655 11:49:18 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:07:19.655 11:49:18 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76440' 00:07:19.655 11:49:18 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 76440 00:07:19.655 11:49:18 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 76440 00:07:20.241 11:49:18 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:20.241 Process with pid 76422 is not found 00:07:20.241 11:49:18 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:20.241 11:49:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 76422 ]] 00:07:20.241 11:49:18 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 76422 00:07:20.241 11:49:18 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 76422 ']' 00:07:20.241 11:49:18 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 76422 00:07:20.241 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (76422) - No such process 00:07:20.241 11:49:18 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 76422 is not found' 00:07:20.241 11:49:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 76440 ]] 00:07:20.241 11:49:18 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 76440 00:07:20.241 11:49:18 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 76440 ']' 00:07:20.241 Process with pid 76440 is not found 00:07:20.241 11:49:18 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 76440 00:07:20.241 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (76440) - No such process 00:07:20.241 11:49:18 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 76440 is not found' 00:07:20.241 11:49:18 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:20.241 ************************************ 00:07:20.241 END TEST cpu_locks 00:07:20.241 ************************************ 00:07:20.241 00:07:20.241 real 0m18.354s 00:07:20.241 user 0m30.287s 00:07:20.241 sys 0m5.674s 00:07:20.241 11:49:18 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.241 11:49:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.241 ************************************ 00:07:20.241 END TEST event 00:07:20.241 ************************************ 00:07:20.241 00:07:20.241 real 0m46.091s 00:07:20.241 user 1m27.521s 00:07:20.241 sys 0m9.206s 00:07:20.241 11:49:18 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:20.241 11:49:18 event -- common/autotest_common.sh@10 -- # set +x 00:07:20.241 11:49:19 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:20.241 11:49:19 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:20.241 11:49:19 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.241 11:49:19 -- common/autotest_common.sh@10 -- # set +x 00:07:20.241 ************************************ 00:07:20.241 START TEST thread 00:07:20.241 ************************************ 00:07:20.241 11:49:19 thread -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:20.501 * Looking for test storage... 
00:07:20.501 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:20.501 11:49:19 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:20.501 11:49:19 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:20.501 11:49:19 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:20.501 11:49:19 thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.501 ************************************ 00:07:20.501 START TEST thread_poller_perf 00:07:20.501 ************************************ 00:07:20.501 11:49:19 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:20.501 [2024-07-21 11:49:19.197483] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:20.501 [2024-07-21 11:49:19.197705] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76565 ] 00:07:20.501 [2024-07-21 11:49:19.364353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.761 [2024-07-21 11:49:19.424426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.761 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:21.699 ====================================== 00:07:21.699 busy:2299708864 (cyc) 00:07:21.699 total_run_count: 401000 00:07:21.699 tsc_hz: 2290000000 (cyc) 00:07:21.699 ====================================== 00:07:21.699 poller_cost: 5734 (cyc), 2503 (nsec) 00:07:21.699 00:07:21.699 ************************************ 00:07:21.699 END TEST thread_poller_perf 00:07:21.699 ************************************ 00:07:21.699 real 0m1.373s 00:07:21.699 user 0m1.162s 00:07:21.699 sys 0m0.104s 00:07:21.699 11:49:20 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.699 11:49:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:21.959 11:49:20 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:21.959 11:49:20 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:07:21.959 11:49:20 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.959 11:49:20 thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.959 ************************************ 00:07:21.959 START TEST thread_poller_perf 00:07:21.959 ************************************ 00:07:21.959 11:49:20 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:21.959 [2024-07-21 11:49:20.630711] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:21.959 [2024-07-21 11:49:20.630941] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76601 ] 00:07:21.959 [2024-07-21 11:49:20.779290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.219 [2024-07-21 11:49:20.825897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.219 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:23.171 ====================================== 00:07:23.171 busy:2293218178 (cyc) 00:07:23.171 total_run_count: 5168000 00:07:23.171 tsc_hz: 2290000000 (cyc) 00:07:23.171 ====================================== 00:07:23.171 poller_cost: 443 (cyc), 193 (nsec) 00:07:23.171 00:07:23.171 real 0m1.331s 00:07:23.171 user 0m1.134s 00:07:23.171 sys 0m0.090s 00:07:23.171 ************************************ 00:07:23.171 END TEST thread_poller_perf 00:07:23.171 ************************************ 00:07:23.171 11:49:21 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.171 11:49:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:23.171 11:49:21 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:23.171 00:07:23.171 real 0m2.951s 00:07:23.171 user 0m2.371s 00:07:23.171 sys 0m0.372s 00:07:23.171 ************************************ 00:07:23.171 END TEST thread 00:07:23.171 ************************************ 00:07:23.171 11:49:21 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:23.171 11:49:21 thread -- common/autotest_common.sh@10 -- # set +x 00:07:23.171 11:49:22 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:23.171 11:49:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:23.171 11:49:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:23.171 11:49:22 -- common/autotest_common.sh@10 -- # set +x 00:07:23.431 ************************************ 00:07:23.431 START TEST accel 00:07:23.431 ************************************ 00:07:23.431 11:49:22 accel -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:23.431 * Looking for test storage... 00:07:23.431 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:23.431 11:49:22 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:23.431 11:49:22 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:23.431 11:49:22 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:23.431 11:49:22 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=76677 00:07:23.431 11:49:22 accel -- accel/accel.sh@63 -- # waitforlisten 76677 00:07:23.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.431 11:49:22 accel -- common/autotest_common.sh@827 -- # '[' -z 76677 ']' 00:07:23.431 11:49:22 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.431 11:49:22 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:23.431 11:49:22 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:23.431 11:49:22 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:23.431 11:49:22 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
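For the record, the two poller_perf summaries above are internally consistent: poller_cost is busy cycles divided by total_run_count, converted to nanoseconds through tsc_hz. Checking the second (0-microsecond-period) run:

    2293218178 cyc / 5168000 runs ≈ 443 cyc per poll
    443 cyc / 2.29 GHz            ≈ 193 nsec per poll

which matches the reported "poller_cost: 443 (cyc), 193 (nsec)"; the timed-poller run works out the same way (2299708864 / 401000 ≈ 5734 cyc ≈ 2503 nsec), roughly 13x more per poll, plausibly from per-expiry timer bookkeeping.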
00:07:23.431 11:49:22 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:23.431 11:49:22 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.431 11:49:22 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.431 11:49:22 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.431 11:49:22 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.431 11:49:22 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.431 11:49:22 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.431 11:49:22 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:23.431 11:49:22 accel -- accel/accel.sh@41 -- # jq -r . 00:07:23.431 [2024-07-21 11:49:22.258467] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:23.431 [2024-07-21 11:49:22.258581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76677 ] 00:07:23.689 [2024-07-21 11:49:22.409741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.689 [2024-07-21 11:49:22.458480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.256 11:49:23 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:24.256 11:49:23 accel -- common/autotest_common.sh@860 -- # return 0 00:07:24.256 11:49:23 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:24.256 11:49:23 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:24.256 11:49:23 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:24.256 11:49:23 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:24.256 11:49:23 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:24.256 11:49:23 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:24.256 11:49:23 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:24.256 11:49:23 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:24.256 11:49:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.256 11:49:23 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:24.256 11:49:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # IFS== 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:24.256 11:49:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:24.256 11:49:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # IFS== 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:24.256 11:49:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:24.256 11:49:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # IFS== 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:24.256 11:49:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:24.256 11:49:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # IFS== 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:24.256 11:49:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:24.256 11:49:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # IFS== 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:24.256 11:49:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:24.256 11:49:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # IFS== 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:24.256 11:49:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:24.256 11:49:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # IFS== 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:24.256 11:49:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:24.256 11:49:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # IFS== 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:24.256 11:49:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:24.256 11:49:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # IFS== 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:24.256 11:49:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:24.256 11:49:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # IFS== 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:24.256 11:49:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:24.256 11:49:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # IFS== 
00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:24.256 11:49:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:24.256 11:49:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # IFS== 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:24.256 11:49:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:24.256 11:49:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # IFS== 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:24.256 11:49:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:24.256 11:49:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # IFS== 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:24.256 11:49:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:24.256 11:49:23 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:24.256 11:49:23 accel -- accel/accel.sh@72 -- # IFS== 00:07:24.257 11:49:23 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:24.257 11:49:23 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:24.257 11:49:23 accel -- accel/accel.sh@75 -- # killprocess 76677 00:07:24.257 11:49:23 accel -- common/autotest_common.sh@946 -- # '[' -z 76677 ']' 00:07:24.257 11:49:23 accel -- common/autotest_common.sh@950 -- # kill -0 76677 00:07:24.257 11:49:23 accel -- common/autotest_common.sh@951 -- # uname 00:07:24.257 11:49:23 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:24.257 11:49:23 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 76677 00:07:24.257 killing process with pid 76677 00:07:24.257 11:49:23 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:24.257 11:49:23 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:24.257 11:49:23 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 76677' 00:07:24.257 11:49:23 accel -- common/autotest_common.sh@965 -- # kill 76677 00:07:24.257 11:49:23 accel -- common/autotest_common.sh@970 -- # wait 76677 00:07:24.887 11:49:23 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:24.887 11:49:23 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:24.887 11:49:23 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:24.887 11:49:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:24.887 11:49:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.887 11:49:23 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:07:24.887 11:49:23 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:24.887 11:49:23 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:24.887 11:49:23 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.887 11:49:23 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.887 11:49:23 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.887 11:49:23 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.887 11:49:23 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.887 11:49:23 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:24.887 11:49:23 
accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:07:24.887 11:49:23 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:24.887 11:49:23 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:24.887 11:49:23 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:24.887 11:49:23 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:24.887 11:49:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:24.887 11:49:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:24.887 ************************************ 00:07:24.887 START TEST accel_missing_filename 00:07:24.887 ************************************ 00:07:24.887 11:49:23 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:07:24.887 11:49:23 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:24.887 11:49:23 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:24.887 11:49:23 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:24.887 11:49:23 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:24.887 11:49:23 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:24.887 11:49:23 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:24.887 11:49:23 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:24.887 11:49:23 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:24.887 11:49:23 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:24.887 11:49:23 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.887 11:49:23 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.887 11:49:23 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.887 11:49:23 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.887 11:49:23 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.887 11:49:23 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:24.887 11:49:23 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:24.887 [2024-07-21 11:49:23.665096] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:24.887 [2024-07-21 11:49:23.665258] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76730 ] 00:07:25.161 [2024-07-21 11:49:23.809067] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.161 [2024-07-21 11:49:23.857121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.161 [2024-07-21 11:49:23.900293] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.161 [2024-07-21 11:49:23.965370] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:25.419 A filename is required. 
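The accel_missing_filename failure just above exercises the harness's negative-test path: run_test wraps accel_perf in NOT, so the test passes only when accel_perf exits non-zero (here, because a compress workload was requested without the -l input file). A minimal sketch of what such an inverting helper can look like, assuming a simplified form of the NOT helper from common/autotest_common.sh (the real one also remaps exit codes, as the es= bookkeeping in the trace below shows):

  # simplified stand-in for the harness's NOT helper; the exit-code
  # remapping done by common/autotest_common.sh is omitted here
  NOT() {
      local es=0
      "$@" || es=$?
      # success for a negative test means the wrapped command failed
      (( es != 0 ))
  }

  # expected to fail: -w compress needs an input file via -l
  NOT accel_perf -t 1 -w compress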
00:07:25.419 11:49:24 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:25.419 11:49:24 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:25.419 11:49:24 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:25.419 11:49:24 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:25.419 11:49:24 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:25.419 11:49:24 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:25.419 00:07:25.419 real 0m0.445s 00:07:25.419 user 0m0.243s 00:07:25.419 sys 0m0.140s 00:07:25.419 11:49:24 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.419 11:49:24 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:25.419 ************************************ 00:07:25.419 END TEST accel_missing_filename 00:07:25.419 ************************************ 00:07:25.420 11:49:24 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:25.420 11:49:24 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:25.420 11:49:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.420 11:49:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.420 ************************************ 00:07:25.420 START TEST accel_compress_verify 00:07:25.420 ************************************ 00:07:25.420 11:49:24 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:25.420 11:49:24 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:25.420 11:49:24 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:25.420 11:49:24 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:25.420 11:49:24 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:25.420 11:49:24 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:25.420 11:49:24 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:25.420 11:49:24 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:25.420 11:49:24 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:25.420 11:49:24 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:25.420 11:49:24 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.420 11:49:24 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.420 11:49:24 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.420 11:49:24 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.420 11:49:24 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.420 11:49:24 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:25.420 11:49:24 accel.accel_compress_verify -- 
accel/accel.sh@41 -- # jq -r . 00:07:25.420 [2024-07-21 11:49:24.178372] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:25.420 [2024-07-21 11:49:24.178474] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76750 ] 00:07:25.678 [2024-07-21 11:49:24.339466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.678 [2024-07-21 11:49:24.383535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.678 [2024-07-21 11:49:24.426809] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:25.678 [2024-07-21 11:49:24.491330] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:25.937 00:07:25.937 Compression does not support the verify option, aborting. 00:07:25.937 11:49:24 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:25.937 11:49:24 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:25.937 11:49:24 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:25.937 ************************************ 00:07:25.937 END TEST accel_compress_verify 00:07:25.937 ************************************ 00:07:25.937 11:49:24 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:25.937 11:49:24 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:25.937 11:49:24 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:25.937 00:07:25.937 real 0m0.455s 00:07:25.937 user 0m0.266s 00:07:25.937 sys 0m0.134s 00:07:25.937 11:49:24 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.937 11:49:24 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:25.937 11:49:24 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:25.937 11:49:24 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:25.937 11:49:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.937 11:49:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.937 ************************************ 00:07:25.937 START TEST accel_wrong_workload 00:07:25.937 ************************************ 00:07:25.937 11:49:24 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:07:25.937 11:49:24 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:25.937 11:49:24 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:25.937 11:49:24 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:25.937 11:49:24 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:25.937 11:49:24 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:25.937 11:49:24 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:25.937 11:49:24 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:25.937 11:49:24 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:25.937 11:49:24 
accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:25.937 11:49:24 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.937 11:49:24 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.937 11:49:24 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.937 11:49:24 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.937 11:49:24 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.937 11:49:24 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:25.937 11:49:24 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:25.937 Unsupported workload type: foobar 00:07:25.937 [2024-07-21 11:49:24.685531] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:25.937 accel_perf options: 00:07:25.937 [-h help message] 00:07:25.937 [-q queue depth per core] 00:07:25.937 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:25.937 [-T number of threads per core 00:07:25.937 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:25.937 [-t time in seconds] 00:07:25.937 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:25.937 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:25.937 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:25.937 [-l for compress/decompress workloads, name of uncompressed input file 00:07:25.937 [-S for crc32c workload, use this seed value (default 0) 00:07:25.937 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:25.937 [-f for fill workload, use this BYTE value (default 255) 00:07:25.937 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:25.937 [-y verify result if this switch is on] 00:07:25.937 [-a tasks to allocate per core (default: same value as -q)] 00:07:25.937 Can be used to spread operations across a wider range of memory. 
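The option listing printed above is the same accel_perf interface the positive tests later in this log drive. For orientation, two illustrative invocations assembled only from options in that listing, using the binary path that appears elsewhere in this log (the flag values are examples, not prescriptions):

  # crc32c for 1 second with seed 32 and result verification, as in
  # the accel_crc32c test below
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y

  # compress with a named input file; omitting -l is exactly what the
  # accel_missing_filename test above checked for
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib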
00:07:25.937 11:49:24 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:25.937 11:49:24 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:25.937 11:49:24 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:25.937 ************************************ 00:07:25.937 END TEST accel_wrong_workload 00:07:25.937 ************************************ 00:07:25.937 11:49:24 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:25.937 00:07:25.937 real 0m0.078s 00:07:25.937 user 0m0.074s 00:07:25.937 sys 0m0.048s 00:07:25.937 11:49:24 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:25.937 11:49:24 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:25.937 11:49:24 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:25.937 11:49:24 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:07:25.937 11:49:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:25.937 11:49:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.937 ************************************ 00:07:25.937 START TEST accel_negative_buffers 00:07:25.937 ************************************ 00:07:25.937 11:49:24 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:25.937 11:49:24 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:25.937 11:49:24 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:25.937 11:49:24 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:25.937 11:49:24 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:25.937 11:49:24 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:25.937 11:49:24 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:25.937 11:49:24 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:25.937 11:49:24 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:25.937 11:49:24 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:25.937 11:49:24 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.937 11:49:24 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.937 11:49:24 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.937 11:49:24 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.937 11:49:24 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.937 11:49:24 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:25.937 11:49:24 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:26.195 -x option must be non-negative. 
00:07:26.195 [2024-07-21 11:49:24.828377] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:26.195 accel_perf options: 00:07:26.195 [-h help message] 00:07:26.195 [-q queue depth per core] 00:07:26.195 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:26.195 [-T number of threads per core 00:07:26.195 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:26.195 [-t time in seconds] 00:07:26.195 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:26.195 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:26.195 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:26.195 [-l for compress/decompress workloads, name of uncompressed input file 00:07:26.195 [-S for crc32c workload, use this seed value (default 0) 00:07:26.195 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:26.195 [-f for fill workload, use this BYTE value (default 255) 00:07:26.195 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:26.195 [-y verify result if this switch is on] 00:07:26.195 [-a tasks to allocate per core (default: same value as -q)] 00:07:26.195 Can be used to spread operations across a wider range of memory. 00:07:26.195 11:49:24 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:26.195 11:49:24 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:26.195 11:49:24 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:26.195 11:49:24 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:26.195 ************************************ 00:07:26.195 END TEST accel_negative_buffers 00:07:26.195 ************************************ 00:07:26.195 00:07:26.195 real 0m0.083s 00:07:26.195 user 0m0.089s 00:07:26.195 sys 0m0.037s 00:07:26.195 11:49:24 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:26.195 11:49:24 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:26.195 11:49:24 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:26.195 11:49:24 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:26.195 11:49:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:26.195 11:49:24 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.195 ************************************ 00:07:26.195 START TEST accel_crc32c 00:07:26.195 ************************************ 00:07:26.195 11:49:24 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:26.195 11:49:24 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:26.195 11:49:24 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:26.195 11:49:24 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.195 11:49:24 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:26.195 11:49:24 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.195 11:49:24 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:26.195 11:49:24 accel.accel_crc32c -- 
accel/accel.sh@12 -- # build_accel_config 00:07:26.195 11:49:24 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.195 11:49:24 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.195 11:49:24 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.195 11:49:24 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.195 11:49:24 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.195 11:49:24 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:26.195 11:49:24 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:26.195 [2024-07-21 11:49:24.974602] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:26.195 [2024-07-21 11:49:24.974927] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76817 ] 00:07:26.453 [2024-07-21 11:49:25.141584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.453 [2024-07-21 11:49:25.190592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@20 -- 
# val='4096 bytes' 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:26.453 11:49:25 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 
00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:27.827 11:49:26 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.827 00:07:27.827 real 0m1.487s 00:07:27.827 user 0m0.025s 00:07:27.827 sys 0m0.002s 00:07:27.827 11:49:26 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:27.827 11:49:26 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:27.827 ************************************ 00:07:27.827 END TEST accel_crc32c 00:07:27.827 ************************************ 00:07:27.827 11:49:26 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:27.827 11:49:26 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:27.827 11:49:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:27.827 11:49:26 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.827 ************************************ 00:07:27.827 START TEST accel_crc32c_C2 00:07:27.827 ************************************ 00:07:27.827 11:49:26 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:27.827 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.827 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:27.827 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:27.827 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:27.827 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:27.827 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:27.827 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.827 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 
00:07:27.827 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.827 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.827 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.827 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.827 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:27.827 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:27.827 [2024-07-21 11:49:26.523716] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:27.827 [2024-07-21 11:49:26.523948] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76853 ] 00:07:27.827 [2024-07-21 11:49:26.682078] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.085 [2024-07-21 11:49:26.743982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 
00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:28.085 11:49:26 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- 
accel/accel.sh@19 -- # read -r var val 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.461 00:07:29.461 real 0m1.487s 00:07:29.461 user 0m1.248s 00:07:29.461 sys 0m0.154s 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:29.461 11:49:27 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:29.461 ************************************ 00:07:29.461 END TEST accel_crc32c_C2 00:07:29.461 ************************************ 00:07:29.461 11:49:28 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:29.461 11:49:28 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:29.461 11:49:28 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:29.461 11:49:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.461 ************************************ 00:07:29.461 START TEST accel_copy 00:07:29.461 ************************************ 00:07:29.461 11:49:28 accel.accel_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy -y 00:07:29.461 11:49:28 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:29.461 11:49:28 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:07:29.461 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.461 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.461 11:49:28 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:29.461 11:49:28 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:29.461 11:49:28 accel.accel_copy -- accel/accel.sh@12 -- # 
build_accel_config 00:07:29.461 11:49:28 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.461 11:49:28 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.461 11:49:28 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.461 11:49:28 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.461 11:49:28 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.461 11:49:28 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:29.461 11:49:28 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:29.461 [2024-07-21 11:49:28.079083] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:29.461 [2024-07-21 11:49:28.079204] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76888 ] 00:07:29.461 [2024-07-21 11:49:28.238760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.461 [2024-07-21 11:49:28.283997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 
00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:29.719 11:49:28 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.655 11:49:29 accel.accel_copy -- 
accel/accel.sh@19 -- # read -r var val 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:30.655 11:49:29 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.655 00:07:30.655 real 0m1.472s 00:07:30.655 user 0m1.231s 00:07:30.655 sys 0m0.156s 00:07:30.655 11:49:29 accel.accel_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:30.655 11:49:29 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:30.655 ************************************ 00:07:30.655 END TEST accel_copy 00:07:30.655 ************************************ 00:07:30.916 11:49:29 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:30.916 11:49:29 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:30.916 11:49:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:30.916 11:49:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.916 ************************************ 00:07:30.916 START TEST accel_fill 00:07:30.916 ************************************ 00:07:30.916 11:49:29 accel.accel_fill -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:30.916 11:49:29 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:30.916 11:49:29 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:30.916 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:30.916 11:49:29 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:30.916 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:30.916 11:49:29 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:30.916 11:49:29 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:07:30.916 11:49:29 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.916 11:49:29 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.916 11:49:29 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.916 11:49:29 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.916 11:49:29 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.916 11:49:29 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:30.916 11:49:29 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 
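The build_accel_config trace that ends just above (accel_json_cfg=(), the -gt 0 guards, local IFS=,, jq -r .) is how each test hands accel_perf its JSON configuration: fragments accumulate in an array, get comma-joined, and are streamed to the app over an anonymous descriptor such as /dev/fd/62. A rough sketch of that pattern, simplified from accel/accel.sh (the helper name and exact JSON shape here are assumptions, not the script's literal code):

  # hypothetical reduction of build_accel_config from accel/accel.sh
  accel_json_cfg=()   # tests append module/driver JSON fragments here
  build_config() {
      local IFS=,     # comma-join whatever fragments were collected
      echo "{\"subsystems\":[{\"subsystem\":\"accel\",\"config\":[${accel_json_cfg[*]}]}]}" | jq -r .
  }

  # the harness passes the config on an anonymous fd, as in the trace:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -c <(build_config) -t 1 -w fill -f 128 -q 64 -a 64 -y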
00:07:30.916 [2024-07-21 11:49:29.623537] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:30.916 [2024-07-21 11:49:29.623842] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76924 ] 00:07:31.174 [2024-07-21 11:49:29.798193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.174 [2024-07-21 11:49:29.843936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:31.174 11:49:29 
accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:31.174 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:31.175 11:49:29 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:31.175 11:49:29 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:31.175 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:31.175 11:49:29 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@19 -- # 
IFS=: 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:32.549 11:49:31 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.549 00:07:32.549 real 0m1.491s 00:07:32.549 user 0m0.023s 00:07:32.549 sys 0m0.003s 00:07:32.549 ************************************ 00:07:32.549 END TEST accel_fill 00:07:32.549 ************************************ 00:07:32.549 11:49:31 accel.accel_fill -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:32.549 11:49:31 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:32.549 11:49:31 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:32.549 11:49:31 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:32.549 11:49:31 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:32.549 11:49:31 accel -- common/autotest_common.sh@10 -- # set +x 00:07:32.549 ************************************ 00:07:32.549 START TEST accel_copy_crc32c 00:07:32.549 ************************************ 00:07:32.549 11:49:31 accel.accel_copy_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y 00:07:32.549 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:32.549 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:32.549 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.549 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.549 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:32.549 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:32.549 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:32.549 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:32.549 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:32.549 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.549 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.549 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:32.549 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:32.549 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:32.549 [2024-07-21 11:49:31.170930] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
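For reference while reading the trace: the copy_crc32c workload launched here copies a source buffer and computes the CRC-32C (Castagnoli) checksum of the data in one operation; the variable reads below show a CRC seed of 0 and 4096-byte buffers. A minimal plain-C sketch of those semantics (illustrative only, not SPDK's accelerated implementation; the helper names are made up):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Reference CRC-32C: reflected polynomial 0x82F63B78, bit at a time. */
    static uint32_t crc32c(const void *buf, size_t len, uint32_t seed)
    {
        const uint8_t *p = buf;
        uint32_t crc = ~seed;

        while (len--) {
            crc ^= *p++;
            for (int k = 0; k < 8; k++)
                crc = (crc >> 1) ^ (0x82F63B78 & -(crc & 1));
        }
        return ~crc;
    }

    /* copy_crc32c semantics: copy src into dst, return the CRC-32C of the data. */
    static uint32_t copy_crc32c(void *dst, const void *src, size_t len, uint32_t seed)
    {
        memcpy(dst, src, len);
        return crc32c(dst, len, seed);
    }

(The fill test that just finished above is simpler still: it writes a repeating byte pattern, what appears in its trace to be 0x80, across the buffer, i.e. memset-like behavior.)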
00:07:32.549 [2024-07-21 11:49:31.171156] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76959 ] 00:07:32.549 [2024-07-21 11:49:31.322740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.549 [2024-07-21 11:49:31.371066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.808 11:49:31 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:32.808 11:49:31 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@20 
-- # val= 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.744 00:07:33.744 real 0m1.463s 00:07:33.744 user 0m1.239s 00:07:33.744 sys 0m0.143s 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:33.744 11:49:32 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:33.744 ************************************ 00:07:33.744 END TEST accel_copy_crc32c 00:07:33.744 ************************************ 00:07:34.010 11:49:32 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:34.010 11:49:32 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:34.010 11:49:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:34.010 11:49:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.010 ************************************ 00:07:34.010 START TEST accel_copy_crc32c_C2 00:07:34.010 ************************************ 00:07:34.010 11:49:32 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:34.010 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:34.010 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:34.010 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.010 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.010 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:34.010 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
copy_crc32c -y -C 2 00:07:34.010 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.010 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.010 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.010 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.010 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.010 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.010 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:34.010 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:34.010 [2024-07-21 11:49:32.696123] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:34.010 [2024-07-21 11:49:32.696226] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76995 ] 00:07:34.010 [2024-07-21 11:49:32.848270] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.278 [2024-07-21 11:49:32.892302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.278 11:49:32 
accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
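This _C2 variant passes '-C 2' to accel_perf, and the trace just above reads back two buffer sizes (4096 and 8192 bytes), which suggests the copy+CRC operation is being exercised over a chain of source vectors rather than a single flat buffer. Assuming that reading is correct, the property that makes chaining work is that CRC-32C can be threaded across vectors by seeding each call with the previous result; a sketch building on the crc32c() routine from the earlier block:

    #include <stdint.h>
    #include <string.h>
    #include <sys/uio.h>

    /* crc32c() as in the earlier sketch (drop its 'static' if split across files). */
    uint32_t crc32c(const void *buf, size_t len, uint32_t seed);

    /* Chained copy+CRC over a vector list. Because each call is seeded with
     * the previous CRC, the final value equals the CRC-32C of the
     * concatenated data. */
    static uint32_t copy_crc32c_iov(const struct iovec *dst, const struct iovec *src,
                                    int iovcnt, uint32_t seed)
    {
        uint32_t crc = seed;

        for (int i = 0; i < iovcnt; i++) {
            memcpy(dst[i].iov_base, src[i].iov_base, src[i].iov_len);
            crc = crc32c(dst[i].iov_base, src[i].iov_len, crc);
        }
        return crc;
    }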
00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:34.278 11:49:32 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.379 00:07:35.379 real 0m1.454s 00:07:35.379 user 0m0.025s 00:07:35.379 sys 0m0.003s 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:35.379 11:49:34 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:35.379 ************************************ 00:07:35.379 END TEST accel_copy_crc32c_C2 00:07:35.379 ************************************ 00:07:35.379 11:49:34 accel -- accel/accel.sh@107 -- # run_test accel_dualcast 
accel_test -t 1 -w dualcast -y 00:07:35.379 11:49:34 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:35.379 11:49:34 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:35.379 11:49:34 accel -- common/autotest_common.sh@10 -- # set +x 00:07:35.379 ************************************ 00:07:35.379 START TEST accel_dualcast 00:07:35.379 ************************************ 00:07:35.379 11:49:34 accel.accel_dualcast -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dualcast -y 00:07:35.379 11:49:34 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:35.379 11:49:34 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:35.379 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:35.379 11:49:34 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:35.379 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:35.379 11:49:34 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:35.379 11:49:34 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:35.379 11:49:34 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:35.379 11:49:34 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:35.379 11:49:34 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.379 11:49:34 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.379 11:49:34 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:35.379 11:49:34 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:35.379 11:49:34 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:35.379 [2024-07-21 11:49:34.201799] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
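dualcast, starting above, is the simplest of these operations to describe: one source buffer is written to two destinations (offload engines can issue both writes from a single descriptor; the software module selected by this run, accel_module=software in the trace below, has to do the equivalent of two copies). A plain-C sketch of the semantics, not the SPDK code path:

    #include <stddef.h>
    #include <string.h>

    /* dualcast semantics: replicate one source buffer into two destinations. */
    static void dualcast(void *dst1, void *dst2, const void *src, size_t len)
    {
        memcpy(dst1, src, len);
        memcpy(dst2, src, len);
    }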
00:07:35.379 [2024-07-21 11:49:34.201917] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77030 ] 00:07:35.638 [2024-07-21 11:49:34.348397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.638 [2024-07-21 11:49:34.391400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- 
# read -r var val 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:35.638 11:49:34 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.014 
11:49:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:37.014 11:49:35 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.014 00:07:37.014 real 0m1.446s 00:07:37.014 user 0m1.218s 00:07:37.014 sys 0m0.143s 00:07:37.014 ************************************ 00:07:37.014 END TEST accel_dualcast 00:07:37.014 ************************************ 00:07:37.014 11:49:35 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:37.014 11:49:35 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:37.014 11:49:35 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:37.014 11:49:35 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:37.014 11:49:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:37.014 11:49:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.014 ************************************ 00:07:37.014 START TEST accel_compare 00:07:37.014 ************************************ 00:07:37.014 11:49:35 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:07:37.014 11:49:35 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:37.014 11:49:35 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:37.014 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:37.014 11:49:35 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:37.014 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:37.014 11:49:35 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:37.014 11:49:35 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:37.014 11:49:35 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.014 11:49:35 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.014 11:49:35 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.014 11:49:35 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.014 11:49:35 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.014 11:49:35 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:37.014 11:49:35 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:37.014 [2024-07-21 11:49:35.703924] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
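compare, launched above, checks two equally sized buffers for byte equality, i.e. memcmp semantics over the 4096-byte buffers the trace below configures. A sketch with illustrative naming:

    #include <stddef.h>
    #include <string.h>

    /* compare semantics: 0 when the buffers match, non-zero otherwise. */
    static int accel_compare(const void *src1, const void *src2, size_t len)
    {
        return memcmp(src1, src2, len);
    }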
00:07:37.014 [2024-07-21 11:49:35.704075] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77066 ] 00:07:37.014 [2024-07-21 11:49:35.861627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.273 [2024-07-21 11:49:35.904947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.273 11:49:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:37.273 11:49:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:37.273 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:37.273 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:37.273 11:49:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:37.273 11:49:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:37.273 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:37.273 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:37.273 11:49:35 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:37.274 11:49:35 
accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:37.274 11:49:35 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:38.654 11:49:37 
accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:38.654 11:49:37 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.654 00:07:38.654 real 0m1.450s 00:07:38.654 user 0m1.233s 00:07:38.654 sys 0m0.131s 00:07:38.654 11:49:37 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:38.654 11:49:37 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:38.654 ************************************ 00:07:38.654 END TEST accel_compare 00:07:38.654 ************************************ 00:07:38.654 11:49:37 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:38.654 11:49:37 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:07:38.654 11:49:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:38.654 11:49:37 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.654 ************************************ 00:07:38.654 START TEST accel_xor 00:07:38.654 ************************************ 00:07:38.654 11:49:37 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:38.654 [2024-07-21 11:49:37.201375] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
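The first of two xor passes begins above (the later one re-runs with '-x 3'). The operation XORs its source buffers together byte by byte into the destination, the same primitive used for parity in RAID-style layouts; the trace below reads back a source count of 2. A generic sketch for n sources:

    #include <stddef.h>
    #include <stdint.h>

    /* xor semantics: dst[i] = srcs[0][i] ^ srcs[1][i] ^ ... ^ srcs[nsrc-1][i]. */
    static void xor_gen(uint8_t *dst, uint8_t *const *srcs, int nsrc, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            uint8_t b = srcs[0][i];

            for (int s = 1; s < nsrc; s++)
                b ^= srcs[s][i];
            dst[i] = b;
        }
    }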
00:07:38.654 [2024-07-21 11:49:37.201484] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77096 ] 00:07:38.654 [2024-07-21 11:49:37.358649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.654 [2024-07-21 11:49:37.403631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.654 11:49:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:07:38.655 11:49:37 
accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:38.655 11:49:37 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@21 
-- # case "$var" in 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.035 00:07:40.035 real 0m1.456s 00:07:40.035 user 0m1.233s 00:07:40.035 sys 0m0.137s 00:07:40.035 11:49:38 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:40.035 11:49:38 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:40.035 ************************************ 00:07:40.035 END TEST accel_xor 00:07:40.035 ************************************ 00:07:40.035 11:49:38 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:40.035 11:49:38 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:07:40.035 11:49:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:40.035 11:49:38 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.035 ************************************ 00:07:40.035 START TEST accel_xor 00:07:40.035 ************************************ 00:07:40.035 11:49:38 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:40.035 11:49:38 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:40.035 [2024-07-21 11:49:38.705649] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
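This second xor pass is the '-x 3' case from the run_test line above: the same operation over three 4096-byte sources. Reusing the xor_gen() sketch, a hypothetical invocation would look like:

    #include <stddef.h>
    #include <stdint.h>

    /* xor_gen() as in the earlier sketch (drop its 'static' if split across files). */
    void xor_gen(uint8_t *dst, uint8_t *const *srcs, int nsrc, size_t len);

    int main(void)
    {
        static uint8_t a[4096], b[4096], c[4096], parity[4096];
        uint8_t *const srcs[] = { a, b, c };

        /* Three-source XOR, as exercised by "-w xor -y -x 3". */
        xor_gen(parity, srcs, 3, sizeof(parity));
        return 0;
    }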
00:07:40.035 [2024-07-21 11:49:38.705756] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77137 ]
00:07:40.035 [2024-07-21 11:49:38.863484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:40.293 [2024-07-21 11:49:38.906961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:40.293 11:49:38 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1
00:07:40.293 11:49:38 accel.accel_xor -- accel/accel.sh@20 -- # val=xor
00:07:40.293 11:49:38 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor
00:07:40.293 11:49:38 accel.accel_xor -- accel/accel.sh@20 -- # val=3
00:07:40.293 11:49:38 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:40.294 11:49:38 accel.accel_xor -- accel/accel.sh@20 -- # val=software
00:07:40.294 11:49:38 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software
00:07:40.294 11:49:38 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:40.294 11:49:38 accel.accel_xor -- accel/accel.sh@20 -- # val=32
00:07:40.294 11:49:38 accel.accel_xor -- accel/accel.sh@20 -- # val=1
00:07:40.294 11:49:38 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds'
00:07:40.294 11:49:38 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes
00:07:41.671 11:49:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:41.671 11:49:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:41.671 11:49:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:41.671 real 0m1.454s
00:07:41.671 user 0m1.220s
00:07:41.671 sys 0m0.150s
00:07:41.671 11:49:40 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:41.671 11:49:40 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:07:41.671 ************************************
00:07:41.671 END TEST accel_xor
00:07:41.671 ************************************
00:07:41.671 11:49:40 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:07:41.671 11:49:40 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']'
00:07:41.671 11:49:40 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:41.671 11:49:40 accel -- common/autotest_common.sh@10 -- # set +x
00:07:41.671 ************************************
00:07:41.671 START TEST accel_dif_verify
00:07:41.671 ************************************
00:07:41.672 11:49:40 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:07:41.672 11:49:40 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config
00:07:41.672 11:49:40 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:07:41.672 11:49:40 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r .
00:07:41.672 [2024-07-21 11:49:40.222091] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:07:41.672 [2024-07-21 11:49:40.222251] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77167 ]
00:07:41.672 [2024-07-21 11:49:40.381372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:41.672 [2024-07-21 11:49:40.427916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:41.672 11:49:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1
00:07:41.672 11:49:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify
00:07:41.672 11:49:40 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:07:41.672 11:49:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:41.672 11:49:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:41.672 11:49:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes'
00:07:41.672 11:49:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes'
00:07:41.672 11:49:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software
00:07:41.672 11:49:40 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software
00:07:41.672 11:49:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:07:41.672 11:49:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32
00:07:41.672 11:49:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1
00:07:41.672 11:49:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds'
00:07:41.672 11:49:40 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No
00:07:43.051 11:49:41 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:43.051 11:49:41 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:07:43.051 11:49:41 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:43.051 real 0m1.461s
00:07:43.051 user 0m1.242s
00:07:43.051 sys 0m0.133s
00:07:43.051 11:49:41 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:43.051 11:49:41 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:07:43.051 ************************************
00:07:43.051 END TEST accel_dif_verify
00:07:43.051 ************************************
00:07:43.051 11:49:41 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:07:43.051 11:49:41 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']'
00:07:43.051 11:49:41 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:43.051 11:49:41 accel -- common/autotest_common.sh@10 -- # set +x
00:07:43.051 ************************************
00:07:43.051 START TEST accel_dif_generate
00:07:43.051 ************************************
00:07:43.051 11:49:41 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:07:43.051 11:49:41 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config
00:07:43.051 11:49:41 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:07:43.051 11:49:41 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r .
00:07:43.051 [2024-07-21 11:49:41.737429] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:07:43.051 [2024-07-21 11:49:41.737579] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77208 ]
00:07:43.311 [2024-07-21 11:49:41.886322] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:43.311 [2024-07-21 11:49:41.929211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:43.311 11:49:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1
00:07:43.311 11:49:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate
00:07:43.311 11:49:41 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:07:43.311 11:49:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:43.311 11:49:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:43.311 11:49:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes'
00:07:43.311 11:49:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes'
00:07:43.311 11:49:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software
00:07:43.311 11:49:41 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software
00:07:43.311 11:49:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:07:43.311 11:49:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:07:43.311 11:49:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1
00:07:43.311 11:49:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds'
00:07:43.311 11:49:41 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No
00:07:44.702 11:49:43 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:44.702 11:49:43 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:07:44.702 11:49:43 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:44.702 real 0m1.449s
00:07:44.702 user 0m1.234s
00:07:44.702 sys 0m0.132s
00:07:44.702 11:49:43 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:44.702 11:49:43 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:07:44.702 ************************************
00:07:44.702 END TEST accel_dif_generate
00:07:44.702 ************************************
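dif_generate is the producing side of the same protection scheme: for each data block it computes a fresh integrity field instead of checking one. The guard portion is a CRC-16 over the block; the T10-DIF variant uses polynomial 0x8BB7 with a zero initial value. A minimal bitwise C sketch (correct but slow; production implementations are table-driven or use CRC instructions):

    #include <stddef.h>
    #include <stdint.h>

    /* CRC-16/T10-DIF: poly 0x8BB7, init 0, no reflection, no final XOR. */
    static uint16_t
    crc16_t10dif(const uint8_t *buf, size_t len)
    {
        uint16_t crc = 0;
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)((uint16_t)buf[i] << 8);
            for (int bit = 0; bit < 8; bit++) {
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                     : (uint16_t)(crc << 1);
            }
        }
        return crc;
    }

A generate pass stores this value as the guard of each block's 8-byte field before filling in the tags.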
00:07:44.702 11:49:43 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:07:44.702 11:49:43 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']'
00:07:44.702 11:49:43 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:44.702 11:49:43 accel -- common/autotest_common.sh@10 -- # set +x
00:07:44.702 ************************************
00:07:44.702 START TEST accel_dif_generate_copy
00:07:44.702 ************************************
00:07:44.702 11:49:43 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:07:44.702 11:49:43 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
00:07:44.702 11:49:43 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:07:44.702 11:49:43 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r .
00:07:44.702 [2024-07-21 11:49:43.221930] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:07:44.702 [2024-07-21 11:49:43.222080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77238 ]
00:07:44.702 [2024-07-21 11:49:43.392937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:44.702 [2024-07-21 11:49:43.442306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:44.702 11:49:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1
00:07:44.702 11:49:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy
00:07:44.702 11:49:43 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy
00:07:44.702 11:49:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:44.702 11:49:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:44.703 11:49:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software
00:07:44.703 11:49:43 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software
00:07:44.703 11:49:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:07:44.703 11:49:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:07:44.703 11:49:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1
00:07:44.703 11:49:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:07:44.703 11:49:43 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No
00:07:46.079 11:49:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:46.079 11:49:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:07:46.079 11:49:44 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:46.079 real 0m1.466s
00:07:46.079 user 0m1.227s
00:07:46.079 sys 0m0.155s
00:07:46.079 11:49:44 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:46.079 11:49:44 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x
00:07:46.079 ************************************
00:07:46.079 END TEST accel_dif_generate_copy
00:07:46.079 ************************************
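dif_generate_copy fuses two steps that would otherwise be separate operations: copying the payload and generating its integrity fields, so unprotected source data lands in the destination already protected. A rough sketch under an assumed layout -- the function, the append-after-block placement, and the crc_fn parameter are invented for illustration, and endianness is again elided:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Copy one data block and emit a freshly generated integrity field
     * immediately after it (e.g. 4096 data bytes + 8 DIF bytes). */
    static void
    dif_generate_copy_block(uint8_t *dst, const uint8_t *src, size_t block_len,
                            uint32_t ref_tag,
                            uint16_t (*crc_fn)(const uint8_t *, size_t))
    {
        memcpy(dst, src, block_len);
        uint16_t guard = crc_fn(src, block_len);
        memcpy(dst + block_len, &guard, sizeof(guard));          /* guard   */
        memset(dst + block_len + 2, 0, 2);                       /* app tag */
        memcpy(dst + block_len + 4, &ref_tag, sizeof(ref_tag));  /* ref tag */
    }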
00:07:46.079 11:49:44 accel -- accel/accel.sh@115 -- # [[ y == y ]]
00:07:46.079 11:49:44 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:46.079 11:49:44 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']'
00:07:46.079 11:49:44 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:46.079 11:49:44 accel -- common/autotest_common.sh@10 -- # set +x
00:07:46.079 ************************************
00:07:46.079 START TEST accel_comp
00:07:46.079 ************************************
00:07:46.079 11:49:44 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:46.079 11:49:44 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config
00:07:46.079 11:49:44 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:46.079 11:49:44 accel.accel_comp -- accel/accel.sh@41 -- # jq -r .
00:07:46.079 [2024-07-21 11:49:44.751477] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:07:46.079 [2024-07-21 11:49:44.751588] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77274 ]
00:07:46.338 [2024-07-21 11:49:44.910442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:46.338 [2024-07-21 11:49:44.953863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:46.338 11:49:45 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1
00:07:46.338 11:49:45 accel.accel_comp -- accel/accel.sh@20 -- # val=compress
00:07:46.338 11:49:45 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress
00:07:46.338 11:49:45 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:46.338 11:49:45 accel.accel_comp -- accel/accel.sh@20 -- # val=software
00:07:46.338 11:49:45 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software
00:07:46.338 11:49:45 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:46.338 11:49:45 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:07:46.338 11:49:45 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:07:46.338 11:49:45 accel.accel_comp -- accel/accel.sh@20 -- # val=1
00:07:46.338 11:49:45 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds'
00:07:46.338 11:49:45 accel.accel_comp -- accel/accel.sh@20 -- # val=No
00:07:47.711 11:49:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:47.711 11:49:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
00:07:47.711 11:49:46 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:47.711 real 0m1.470s
00:07:47.711 user 0m1.243s
00:07:47.711 sys 0m0.144s
00:07:47.711 11:49:46 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:47.711 11:49:46 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x
00:07:47.711 ************************************
00:07:47.711 END TEST accel_comp
00:07:47.711 ************************************
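accel_comp compresses the test file /home/vagrant/spdk_repo/spdk/test/accel/bib through the software module, and the accel_decomp run that follows decompresses it with -y verifying the round trip. As a rough stand-in for a one-shot software compress path, a zlib sketch -- zlib is chosen purely for illustration, since the log does not say which codec the software module uses, and compress_buf is an invented helper:

    #include <stdlib.h>
    #include <zlib.h>

    /* Single-shot compression of an in-memory buffer; the caller owns
     * and frees *out on success. */
    static int
    compress_buf(const unsigned char *in, size_t in_len,
                 unsigned char **out, size_t *out_len)
    {
        uLongf dst_len = compressBound((uLong)in_len);
        unsigned char *dst = malloc(dst_len);
        if (dst == NULL) return -1;
        if (compress(dst, &dst_len, in, (uLong)in_len) != Z_OK) {
            free(dst);
            return -1;
        }
        *out = dst;
        *out_len = (size_t)dst_len;
        return 0;
    }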
11:49:46 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:47.711 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:47.711 11:49:46 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:47.712 [2024-07-21 11:49:46.279950] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:47.712 [2024-07-21 11:49:46.280072] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77309 ] 00:07:47.712 [2024-07-21 11:49:46.446413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.712 [2024-07-21 11:49:46.496031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" 
in 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:47.712 11:49:46 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:47.712 11:49:46 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:49.086 ************************************ 00:07:49.086 END TEST accel_decomp 00:07:49.086 ************************************ 00:07:49.086 11:49:47 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.086 00:07:49.086 real 0m1.491s 00:07:49.086 user 0m1.249s 00:07:49.086 sys 0m0.160s 00:07:49.086 11:49:47 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:49.086 11:49:47 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:49.086 11:49:47 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:49.086 11:49:47 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:49.086 11:49:47 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:49.086 11:49:47 accel -- common/autotest_common.sh@10 -- # set +x 
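The accel_decomp pass recorded above drives one second of software decompress over 4096-byte chunks of the bib test file, finishing in real 0m1.491s. Stripped of the harness plumbing, a roughly equivalent standalone invocation would be the sketch below (assuming a built SPDK tree and running from /home/vagrant/spdk_repo/spdk; the harness's -c /dev/fd/62 only feeds an empty JSON accel config and should be omittable here):

  # software decompress, 1 second, verify the output (-y)
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y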
00:07:49.086 ************************************ 00:07:49.086 START TEST accel_decmop_full 00:07:49.086 ************************************ 00:07:49.086 11:49:47 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:49.086 11:49:47 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:07:49.086 11:49:47 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:07:49.086 11:49:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.086 11:49:47 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:49.086 11:49:47 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.086 11:49:47 accel.accel_decmop_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:49.086 11:49:47 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:07:49.086 11:49:47 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.086 11:49:47 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.086 11:49:47 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.086 11:49:47 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.086 11:49:47 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.086 11:49:47 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:07:49.086 11:49:47 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:07:49.086 [2024-07-21 11:49:47.829486] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:49.086 [2024-07-21 11:49:47.829617] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77345 ] 00:07:49.344 [2024-07-21 11:49:47.986931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.344 [2024-07-21 11:49:48.057184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.344 11:49:48 accel.accel_decmop_full -- 
accel/accel.sh@19 -- # read -r var val 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.344 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:49.345 11:49:48 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.716 11:49:49 accel.accel_decmop_full -- 
accel/accel.sh@20 -- # val= 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:50.716 11:49:49 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.716 00:07:50.716 real 0m1.519s 00:07:50.716 user 0m1.266s 00:07:50.716 sys 0m0.169s 00:07:50.716 11:49:49 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:50.716 11:49:49 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:07:50.716 ************************************ 00:07:50.716 END TEST accel_decmop_full 00:07:50.716 ************************************ 00:07:50.716 11:49:49 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:50.716 11:49:49 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:50.716 11:49:49 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:50.716 11:49:49 accel -- common/autotest_common.sh@10 -- # set +x 00:07:50.716 ************************************ 00:07:50.716 START TEST accel_decomp_mcore 00:07:50.716 ************************************ 00:07:50.716 11:49:49 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:50.716 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:50.716 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:50.716 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.716 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:50.716 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
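accel_decmop_full (the "decmop" spelling is carried over from the run_test name in the harness itself) repeats the decompress run with -o 0 added, and the '111250 bytes' value in the trace suggests accel_perf then works on the whole bib file as a single buffer rather than 4096-byte chunks. A matching standalone sketch, under the same assumptions as above:

  # full-file decompress: -o 0 instead of the default transfer size
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0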
00:07:50.716 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:50.716 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:50.716 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:50.716 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:50.716 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.716 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.716 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:50.716 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:50.716 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:50.716 [2024-07-21 11:49:49.407127] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:50.716 [2024-07-21 11:49:49.407283] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77380 ] 00:07:50.716 [2024-07-21 11:49:49.574600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.973 [2024-07-21 11:49:49.631093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.973 [2024-07-21 11:49:49.631248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.973 [2024-07-21 11:49:49.631330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.973 [2024-07-21 11:49:49.631218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.973 11:49:49 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.973 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.974 11:49:49 accel.accel_decomp_mcore 
-- accel/accel.sh@19 -- # read -r var val 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:50.974 11:49:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.345 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.345 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.345 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.345 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.345 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.345 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.345 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.345 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.345 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.345 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.345 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.345 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.345 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.345 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.345 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.345 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.345 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.345 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.345 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.346 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.346 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.346 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.346 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.346 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.346 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.346 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.346 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.346 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.346 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.346 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:07:52.346 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.346 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.346 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:52.346 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.346 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.346 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.346 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:52.346 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:52.346 11:49:50 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:52.346 00:07:52.346 real 0m1.508s 00:07:52.346 user 0m0.024s 00:07:52.346 sys 0m0.004s 00:07:52.346 11:49:50 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:52.346 11:49:50 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:52.346 ************************************ 00:07:52.346 END TEST accel_decomp_mcore 00:07:52.346 ************************************ 00:07:52.346 11:49:50 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:52.346 11:49:50 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:52.346 11:49:50 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:52.346 11:49:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:52.346 ************************************ 00:07:52.346 START TEST accel_decomp_full_mcore 00:07:52.346 ************************************ 00:07:52.346 11:49:50 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:52.346 11:49:50 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:52.346 11:49:50 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:52.346 11:49:50 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:52.346 11:49:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.346 11:49:50 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.346 11:49:50 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:52.346 11:49:50 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:52.346 11:49:50 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:52.346 11:49:50 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:52.346 11:49:50 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.346 11:49:50 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.346 11:49:50 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:52.346 11:49:50 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:52.346 11:49:50 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
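accel_decomp_mcore adds -m 0xf, and the EAL banner confirms the effect: four cores available and one reactor started on each of cores 0 through 3, all sharing the software module. Standalone sketch, same assumptions as before:

  # multicore decompress on a 4-core mask
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -m 0xf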
00:07:52.346 [2024-07-21 11:49:50.973252] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:52.346 [2024-07-21 11:49:50.973421] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77419 ] 00:07:52.346 [2024-07-21 11:49:51.139904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.346 [2024-07-21 11:49:51.197747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.346 [2024-07-21 11:49:51.198087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.346 [2024-07-21 11:49:51.198196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.346 [2024-07-21 11:49:51.198102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.605 11:49:51 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.605 11:49:51 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:52.605 11:49:51 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.977 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.977 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.977 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.977 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.977 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.978 00:07:53.978 real 0m1.519s 00:07:53.978 user 0m0.026s 00:07:53.978 sys 0m0.004s 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:53.978 11:49:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:53.978 ************************************ 00:07:53.978 END TEST accel_decomp_full_mcore 00:07:53.978 ************************************ 00:07:53.978 11:49:52 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:53.978 11:49:52 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:07:53.978 11:49:52 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:53.978 11:49:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:53.978 ************************************ 00:07:53.978 START TEST accel_decomp_mthread 00:07:53.978 ************************************ 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:53.978 [2024-07-21 11:49:52.543969] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
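accel_decomp_full_mcore combines the two previous variants: the full 111250-byte buffer (-o 0) spread over the 4-core mask (-m 0xf), again completing in about a second and a half. Same assumptions as the earlier sketches:

  # full-file decompress across four cores
  ./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf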
00:07:53.978 [2024-07-21 11:49:52.544146] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77457 ] 00:07:53.978 [2024-07-21 11:49:52.709017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.978 [2024-07-21 11:49:52.764002] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:53.978 11:49:52 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case 
"$var" in 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:55.353 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:55.354 11:49:53 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:55.354 00:07:55.354 real 0m1.496s 00:07:55.354 user 0m1.267s 00:07:55.354 sys 0m0.145s 00:07:55.354 11:49:53 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:55.354 11:49:53 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:55.354 ************************************ 00:07:55.354 END TEST accel_decomp_mthread 00:07:55.354 ************************************ 00:07:55.354 11:49:54 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:55.354 11:49:54 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:07:55.354 11:49:54 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:55.354 11:49:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:55.354 ************************************ 00:07:55.354 START TEST accel_decomp_full_mthread 00:07:55.354 ************************************ 00:07:55.354 11:49:54 
accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:55.354 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:55.354 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:55.354 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.354 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:55.354 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.354 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:55.354 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:55.354 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:55.354 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:55.354 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.354 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.354 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:55.354 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:55.354 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:55.354 [2024-07-21 11:49:54.110428] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:55.354 [2024-07-21 11:49:54.110629] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77493 ] 00:07:55.612 [2024-07-21 11:49:54.276744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.612 [2024-07-21 11:49:54.325120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.612 11:49:54 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:55.612 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- 
accel/accel.sh@21 -- # case "$var" in 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:55.613 11:49:54 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.986 00:07:56.986 real 0m1.525s 00:07:56.986 user 0m1.289s 00:07:56.986 sys 0m0.151s 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:56.986 11:49:55 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:56.986 ************************************ 00:07:56.986 END TEST accel_decomp_full_mthread 00:07:56.986 ************************************ 00:07:56.986 11:49:55 accel -- 
accel/accel.sh@124 -- # [[ n == y ]] 00:07:56.986 11:49:55 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:56.986 11:49:55 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:56.986 11:49:55 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:56.986 11:49:55 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:07:56.986 11:49:55 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:56.986 11:49:55 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.986 11:49:55 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:56.986 11:49:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:56.986 11:49:55 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.986 11:49:55 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:56.986 11:49:55 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:56.986 11:49:55 accel -- accel/accel.sh@41 -- # jq -r . 00:07:56.986 ************************************ 00:07:56.986 START TEST accel_dif_functional_tests 00:07:56.986 ************************************ 00:07:56.986 11:49:55 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:56.986 [2024-07-21 11:49:55.732795] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:56.986 [2024-07-21 11:49:55.732954] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77529 ] 00:07:57.244 [2024-07-21 11:49:55.898053] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:57.244 [2024-07-21 11:49:55.955057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.244 [2024-07-21 11:49:55.955149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.244 [2024-07-21 11:49:55.955261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.244 00:07:57.244 00:07:57.244 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.244 http://cunit.sourceforge.net/ 00:07:57.244 00:07:57.244 00:07:57.244 Suite: accel_dif 00:07:57.244 Test: verify: DIF generated, GUARD check ...passed 00:07:57.244 Test: verify: DIF generated, APPTAG check ...passed 00:07:57.244 Test: verify: DIF generated, REFTAG check ...passed 00:07:57.244 Test: verify: DIF not generated, GUARD check ...[2024-07-21 11:49:56.027354] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:57.244 passed 00:07:57.244 Test: verify: DIF not generated, APPTAG check ...[2024-07-21 11:49:56.027547] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:57.244 passed 00:07:57.244 Test: verify: DIF not generated, REFTAG check ...[2024-07-21 11:49:56.027669] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:57.244 passed 00:07:57.244 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:57.244 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-21 11:49:56.027928] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:57.244 passed 00:07:57.244 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:57.244 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:57.244 Test: verify: 
REFTAG_INIT correct, REFTAG check ...passed 00:07:57.244 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-21 11:49:56.028282] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:57.244 passed 00:07:57.244 Test: verify copy: DIF generated, GUARD check ...passed 00:07:57.244 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:57.244 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:57.244 Test: verify copy: DIF not generated, GUARD check ...[2024-07-21 11:49:56.028697] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:57.244 passed 00:07:57.244 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-21 11:49:56.028852] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:57.244 passed 00:07:57.244 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-21 11:49:56.029011] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:57.244 passed 00:07:57.244 Test: generate copy: DIF generated, GUARD check ...passed 00:07:57.244 Test: generate copy: DIF generated, APPTAG check ...passed 00:07:57.244 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:57.244 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:57.244 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:57.244 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:57.244 Test: generate copy: iovecs-len validate ...[2024-07-21 11:49:56.029683] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:57.244 passed 00:07:57.244 Test: generate copy: buffer alignment validate ...passed 00:07:57.244 00:07:57.244 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.244 suites 1 1 n/a 0 0 00:07:57.244 tests 26 26 26 0 0 00:07:57.244 asserts 115 115 115 0 n/a 00:07:57.244 00:07:57.244 Elapsed time = 0.006 seconds 00:07:57.501 00:07:57.501 real 0m0.610s 00:07:57.501 user 0m0.688s 00:07:57.501 sys 0m0.207s 00:07:57.501 11:49:56 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:57.501 11:49:56 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:57.501 ************************************ 00:07:57.501 END TEST accel_dif_functional_tests 00:07:57.501 ************************************ 00:07:57.501 00:07:57.502 real 0m34.267s 00:07:57.502 user 0m34.861s 00:07:57.502 sys 0m5.100s 00:07:57.502 11:49:56 accel -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:57.502 11:49:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:57.502 ************************************ 00:07:57.502 END TEST accel 00:07:57.502 ************************************ 00:07:57.502 11:49:56 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:57.502 11:49:56 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:57.502 11:49:56 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:57.502 11:49:56 -- common/autotest_common.sh@10 -- # set +x 00:07:57.759 ************************************ 00:07:57.759 START TEST accel_rpc 00:07:57.759 ************************************ 00:07:57.759 11:49:56 accel_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:57.759 * Looking for test 
storage... 00:07:57.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:57.759 11:49:56 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:57.759 11:49:56 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=77596 00:07:57.759 11:49:56 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 77596 00:07:57.759 11:49:56 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:57.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.759 11:49:56 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 77596 ']' 00:07:57.759 11:49:56 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.759 11:49:56 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:57.759 11:49:56 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.759 11:49:56 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:57.759 11:49:56 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.759 [2024-07-21 11:49:56.569770] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:07:57.759 [2024-07-21 11:49:56.569997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77596 ] 00:07:58.017 [2024-07-21 11:49:56.718460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.017 [2024-07-21 11:49:56.787408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.584 11:49:57 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:58.584 11:49:57 accel_rpc -- common/autotest_common.sh@860 -- # return 0 00:07:58.584 11:49:57 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:58.584 11:49:57 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:58.584 11:49:57 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:58.584 11:49:57 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:58.584 11:49:57 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:58.584 11:49:57 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:58.584 11:49:57 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:58.584 11:49:57 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:58.584 ************************************ 00:07:58.584 START TEST accel_assign_opcode 00:07:58.584 ************************************ 00:07:58.584 11:49:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite 00:07:58.584 11:49:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:58.584 11:49:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.584 11:49:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:58.584 [2024-07-21 11:49:57.423298] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:58.584 11:49:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.584 11:49:57 accel_rpc.accel_assign_opcode -- 
accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:58.584 11:49:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.584 11:49:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:58.584 [2024-07-21 11:49:57.435259] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:58.584 11:49:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.584 11:49:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:58.584 11:49:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.584 11:49:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:58.842 11:49:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.842 11:49:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:58.842 11:49:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:58.842 11:49:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:58.842 11:49:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.842 11:49:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:58.842 11:49:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.842 software 00:07:58.842 ************************************ 00:07:58.842 END TEST accel_assign_opcode 00:07:58.842 ************************************ 00:07:58.842 00:07:58.842 real 0m0.262s 00:07:58.842 user 0m0.056s 00:07:58.842 sys 0m0.014s 00:07:58.842 11:49:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:58.842 11:49:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:59.098 11:49:57 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 77596 00:07:59.098 11:49:57 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 77596 ']' 00:07:59.098 11:49:57 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 77596 00:07:59.098 11:49:57 accel_rpc -- common/autotest_common.sh@951 -- # uname 00:07:59.098 11:49:57 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:59.098 11:49:57 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77596 00:07:59.098 killing process with pid 77596 00:07:59.098 11:49:57 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:59.098 11:49:57 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:59.098 11:49:57 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77596' 00:07:59.098 11:49:57 accel_rpc -- common/autotest_common.sh@965 -- # kill 77596 00:07:59.098 11:49:57 accel_rpc -- common/autotest_common.sh@970 -- # wait 77596 00:07:59.356 00:07:59.356 real 0m1.787s 00:07:59.356 user 0m1.746s 00:07:59.356 sys 0m0.512s 00:07:59.356 11:49:58 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:59.356 11:49:58 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:59.356 ************************************ 00:07:59.356 END TEST accel_rpc 00:07:59.356 ************************************ 00:07:59.356 11:49:58 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:59.356 11:49:58 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:07:59.356 11:49:58 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:59.356 11:49:58 -- common/autotest_common.sh@10 -- # set +x 00:07:59.356 ************************************ 00:07:59.356 START TEST app_cmdline 00:07:59.356 ************************************ 00:07:59.356 11:49:58 app_cmdline -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:59.614 * Looking for test storage... 00:07:59.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:59.614 11:49:58 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:59.614 11:49:58 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=77689 00:07:59.614 11:49:58 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:59.614 11:49:58 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 77689 00:07:59.614 11:49:58 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 77689 ']' 00:07:59.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.614 11:49:58 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.614 11:49:58 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:59.614 11:49:58 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.614 11:49:58 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:59.614 11:49:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:59.614 [2024-07-21 11:49:58.428905] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:07:59.614 [2024-07-21 11:49:58.429025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77689 ] 00:07:59.871 [2024-07-21 11:49:58.594141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.871 [2024-07-21 11:49:58.648088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.804 11:49:59 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:00.804 11:49:59 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:08:00.804 11:49:59 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:00.804 { 00:08:00.804 "version": "SPDK v24.05.1-pre git sha1 5fa2f5086", 00:08:00.804 "fields": { 00:08:00.804 "major": 24, 00:08:00.804 "minor": 5, 00:08:00.804 "patch": 1, 00:08:00.804 "suffix": "-pre", 00:08:00.804 "commit": "5fa2f5086" 00:08:00.804 } 00:08:00.804 } 00:08:00.804 11:49:59 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:00.804 11:49:59 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:00.804 11:49:59 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:00.804 11:49:59 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:00.804 11:49:59 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:00.804 11:49:59 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.804 11:49:59 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:00.804 11:49:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:00.804 11:49:59 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:00.804 11:49:59 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.804 11:49:59 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:00.804 11:49:59 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:00.804 11:49:59 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.804 11:49:59 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:08:00.804 11:49:59 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.804 11:49:59 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.804 11:49:59 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.804 11:49:59 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.804 11:49:59 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.804 11:49:59 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.804 11:49:59 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:00.804 11:49:59 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:00.804 11:49:59 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:00.804 11:49:59 app_cmdline -- common/autotest_common.sh@651 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:01.062 request: 00:08:01.062 { 00:08:01.062 "method": "env_dpdk_get_mem_stats", 00:08:01.062 "req_id": 1 00:08:01.062 } 00:08:01.062 Got JSON-RPC error response 00:08:01.062 response: 00:08:01.062 { 00:08:01.062 "code": -32601, 00:08:01.062 "message": "Method not found" 00:08:01.062 } 00:08:01.062 11:49:59 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:08:01.062 11:49:59 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:01.062 11:49:59 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:01.062 11:49:59 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:01.062 11:49:59 app_cmdline -- app/cmdline.sh@1 -- # killprocess 77689 00:08:01.062 11:49:59 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 77689 ']' 00:08:01.062 11:49:59 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 77689 00:08:01.062 11:49:59 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:08:01.062 11:49:59 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:01.062 11:49:59 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77689 00:08:01.062 killing process with pid 77689 00:08:01.062 11:49:59 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:01.062 11:49:59 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:01.062 11:49:59 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77689' 00:08:01.062 11:49:59 app_cmdline -- common/autotest_common.sh@965 -- # kill 77689 00:08:01.062 11:49:59 app_cmdline -- common/autotest_common.sh@970 -- # wait 77689 00:08:01.320 00:08:01.320 real 0m1.955s 00:08:01.320 user 0m2.252s 00:08:01.320 sys 0m0.522s 00:08:01.320 11:50:00 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:01.320 ************************************ 00:08:01.320 END TEST app_cmdline 00:08:01.320 ************************************ 00:08:01.320 11:50:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:01.577 11:50:00 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:01.577 11:50:00 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:08:01.577 11:50:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:01.577 11:50:00 -- common/autotest_common.sh@10 -- # set +x 00:08:01.577 ************************************ 00:08:01.577 START TEST version 00:08:01.577 ************************************ 00:08:01.577 11:50:00 version -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:01.577 * Looking for test storage... 
00:08:01.577 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:01.577 11:50:00 version -- app/version.sh@17 -- # get_header_version major 00:08:01.577 11:50:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:01.577 11:50:00 version -- app/version.sh@14 -- # cut -f2 00:08:01.577 11:50:00 version -- app/version.sh@14 -- # tr -d '"' 00:08:01.577 11:50:00 version -- app/version.sh@17 -- # major=24 00:08:01.577 11:50:00 version -- app/version.sh@18 -- # get_header_version minor 00:08:01.577 11:50:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:01.577 11:50:00 version -- app/version.sh@14 -- # cut -f2 00:08:01.577 11:50:00 version -- app/version.sh@14 -- # tr -d '"' 00:08:01.577 11:50:00 version -- app/version.sh@18 -- # minor=5 00:08:01.577 11:50:00 version -- app/version.sh@19 -- # get_header_version patch 00:08:01.577 11:50:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:01.577 11:50:00 version -- app/version.sh@14 -- # cut -f2 00:08:01.577 11:50:00 version -- app/version.sh@14 -- # tr -d '"' 00:08:01.577 11:50:00 version -- app/version.sh@19 -- # patch=1 00:08:01.577 11:50:00 version -- app/version.sh@20 -- # get_header_version suffix 00:08:01.577 11:50:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:01.577 11:50:00 version -- app/version.sh@14 -- # cut -f2 00:08:01.577 11:50:00 version -- app/version.sh@14 -- # tr -d '"' 00:08:01.577 11:50:00 version -- app/version.sh@20 -- # suffix=-pre 00:08:01.577 11:50:00 version -- app/version.sh@22 -- # version=24.5 00:08:01.577 11:50:00 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:01.577 11:50:00 version -- app/version.sh@25 -- # version=24.5.1 00:08:01.577 11:50:00 version -- app/version.sh@28 -- # version=24.5.1rc0 00:08:01.577 11:50:00 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:01.577 11:50:00 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:01.834 11:50:00 version -- app/version.sh@30 -- # py_version=24.5.1rc0 00:08:01.835 11:50:00 version -- app/version.sh@31 -- # [[ 24.5.1rc0 == \2\4\.\5\.\1\r\c\0 ]] 00:08:01.835 00:08:01.835 real 0m0.219s 00:08:01.835 user 0m0.104s 00:08:01.835 sys 0m0.164s 00:08:01.835 ************************************ 00:08:01.835 END TEST version 00:08:01.835 ************************************ 00:08:01.835 11:50:00 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:01.835 11:50:00 version -- common/autotest_common.sh@10 -- # set +x 00:08:01.835 11:50:00 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:08:01.835 11:50:00 -- spdk/autotest.sh@198 -- # uname -s 00:08:01.835 11:50:00 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:08:01.835 11:50:00 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:01.835 11:50:00 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:08:01.835 11:50:00 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:08:01.835 11:50:00 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 
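Before the blockdev run starts below, a note on the version suite that just completed: each get_header_version call above expands to the same grep | cut | tr pipeline over include/spdk/version.h. A condensed sketch of that pattern, using the path from this run; the helper name and argument casing here are illustrative, not the script's own:

  # Minimal sketch of the header parsing traced above. version.h fields appear
  # tab-separated (hence `cut -f2` isolates the value); `tr -d '"'` strips quotes.
  get_version_field() {
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
          /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
  }
  major=$(get_version_field MAJOR)    # 24 in this run
  minor=$(get_version_field MINOR)    # 5
  patch=$(get_version_field PATCH)    # 1
  suffix=$(get_version_field SUFFIX)  # -pre
  echo "${major}.${minor}.${patch}"   # 24.5.1; -pre maps to rc0 in the comparison

The suite then checks this against python3 -c 'import spdk; print(spdk.__version__)', and both sides reported 24.5.1rc0 here.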
00:08:01.835 11:50:00 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:01.835 11:50:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:01.835 11:50:00 -- common/autotest_common.sh@10 -- # set +x 00:08:01.835 ************************************ 00:08:01.835 START TEST blockdev_nvme 00:08:01.835 ************************************ 00:08:01.835 11:50:00 blockdev_nvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:08:01.835 * Looking for test storage... 00:08:01.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:01.835 11:50:00 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=77834 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:01.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
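The waitforlisten 77834 call traced next blocks until the freshly launched spdk_tgt answers on /var/tmp/spdk.sock. A hypothetical stand-in for that wait (not the harness's own implementation), polling with an RPC that this log shows is always available:

  # Poll the target's RPC socket until it responds, giving up after
  # max_retries=100 (the value echoed in the trace above).
  pid=77834   # spdk_tgt pid from this run
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for ((i = 0; i < 100; i++)); do
      if "$rpc" -s /var/tmp/spdk.sock -t 1 spdk_get_version &> /dev/null; then
          break
      fi
      kill -0 "$pid" 2> /dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done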
00:08:01.835 11:50:00 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 77834 00:08:01.835 11:50:00 blockdev_nvme -- common/autotest_common.sh@827 -- # '[' -z 77834 ']' 00:08:01.835 11:50:00 blockdev_nvme -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:01.835 11:50:00 blockdev_nvme -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:01.835 11:50:00 blockdev_nvme -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:01.835 11:50:00 blockdev_nvme -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:01.835 11:50:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:02.093 [2024-07-21 11:50:00.742333] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:02.093 [2024-07-21 11:50:00.742965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77834 ] 00:08:02.093 [2024-07-21 11:50:00.909300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.093 [2024-07-21 11:50:00.956296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.034 11:50:01 blockdev_nvme -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:03.034 11:50:01 blockdev_nvme -- common/autotest_common.sh@860 -- # return 0 00:08:03.034 11:50:01 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:08:03.034 11:50:01 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:08:03.034 11:50:01 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:08:03.034 11:50:01 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:03.034 11:50:01 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:03.034 11:50:01 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:03.034 11:50:01 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.034 11:50:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.293 11:50:01 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.293 11:50:01 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:08:03.293 11:50:01 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.293 11:50:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.293 11:50:01 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.293 11:50:01 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:08:03.293 11:50:01 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:08:03.293 11:50:01 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.293 11:50:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.293 11:50:01 blockdev_nvme -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.293 11:50:01 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:08:03.293 11:50:01 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.293 11:50:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.293 11:50:01 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.293 11:50:01 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:03.293 11:50:01 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.293 11:50:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.293 11:50:02 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.293 11:50:02 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:08:03.293 11:50:02 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:08:03.293 11:50:02 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:08:03.293 11:50:02 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.293 11:50:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.293 11:50:02 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.293 11:50:02 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:08:03.293 11:50:02 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:08:03.293 11:50:02 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "32af858c-ffb1-4033-9718-84bdedde1003"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "32af858c-ffb1-4033-9718-84bdedde1003",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "0a158e26-de42-48b2-877d-6fa4d7483b9a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0a158e26-de42-48b2-877d-6fa4d7483b9a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' 
' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "6cbe917f-4382-4d9c-8f4e-304c09cba402"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6cbe917f-4382-4d9c-8f4e-304c09cba402",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "89ca6f2e-62cf-4e0f-aed6-c79a6d4724d8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "89ca6f2e-62cf-4e0f-aed6-c79a6d4724d8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "8ad7eacd-b012-4897-b70c-dadd602342d2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8ad7eacd-b012-4897-b70c-dadd602342d2",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "4201619c-5bb2-4fc4-a9fd-bd73f1a9402c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "4201619c-5bb2-4fc4-a9fd-bd73f1a9402c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:03.293 11:50:02 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:08:03.293 11:50:02 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:08:03.293 11:50:02 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:08:03.293 11:50:02 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 77834 00:08:03.293 11:50:02 blockdev_nvme -- common/autotest_common.sh@946 -- # '[' -z 77834 ']' 00:08:03.293 11:50:02 blockdev_nvme -- common/autotest_common.sh@950 -- # kill -0 77834 00:08:03.293 11:50:02 blockdev_nvme -- common/autotest_common.sh@951 -- # uname 00:08:03.293 11:50:02 blockdev_nvme -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:03.293 11:50:02 blockdev_nvme -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77834 00:08:03.551 killing process with pid 77834 00:08:03.551 11:50:02 blockdev_nvme -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:03.551 11:50:02 blockdev_nvme -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:03.551 11:50:02 blockdev_nvme -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77834' 00:08:03.551 11:50:02 blockdev_nvme -- common/autotest_common.sh@965 -- # kill 
77834 00:08:03.551 11:50:02 blockdev_nvme -- common/autotest_common.sh@970 -- # wait 77834 00:08:03.808 11:50:02 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:03.808 11:50:02 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:03.808 11:50:02 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:03.808 11:50:02 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:03.808 11:50:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.808 ************************************ 00:08:03.808 START TEST bdev_hello_world 00:08:03.808 ************************************ 00:08:03.808 11:50:02 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:03.808 [2024-07-21 11:50:02.653128] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:03.808 [2024-07-21 11:50:02.653321] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77907 ] 00:08:04.064 [2024-07-21 11:50:02.818003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.064 [2024-07-21 11:50:02.868424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.628 [2024-07-21 11:50:03.245829] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:04.628 [2024-07-21 11:50:03.245937] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:04.628 [2024-07-21 11:50:03.245979] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:04.628 [2024-07-21 11:50:03.248228] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:04.628 [2024-07-21 11:50:03.248770] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:04.628 [2024-07-21 11:50:03.248854] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:04.628 [2024-07-21 11:50:03.249059] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
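The hello_bdev NOTICE lines here come from a single example binary; for reference, the invocation exactly as run_test launched it above (the trailing '' is an empty extra argument passed through by the harness):

  # Loads the bdev layer from bdev.json, opens Nvme0n1, writes "Hello World!",
  # reads it back, then stops the app -- matching the NOTICE sequence in this log.
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b Nvme0n1 ''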
00:08:04.628 00:08:04.628 [2024-07-21 11:50:03.249124] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:04.885 00:08:04.885 real 0m0.921s 00:08:04.885 ************************************ 00:08:04.885 END TEST bdev_hello_world 00:08:04.885 ************************************ 00:08:04.885 user 0m0.592s 00:08:04.885 sys 0m0.225s 00:08:04.885 11:50:03 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:04.885 11:50:03 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:04.885 11:50:03 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:08:04.885 11:50:03 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:04.885 11:50:03 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:04.885 11:50:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:04.885 ************************************ 00:08:04.885 START TEST bdev_bounds 00:08:04.885 ************************************ 00:08:04.885 11:50:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:08:04.885 11:50:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=77938 00:08:04.885 11:50:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:04.885 11:50:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:04.885 11:50:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 77938' 00:08:04.885 Process bdevio pid: 77938 00:08:04.885 11:50:03 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 77938 00:08:04.885 11:50:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 77938 ']' 00:08:04.885 11:50:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.885 11:50:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:04.885 11:50:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.885 11:50:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:04.885 11:50:03 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:04.885 [2024-07-21 11:50:03.646972] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:08:04.886 [2024-07-21 11:50:03.647234] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77938 ] 00:08:05.142 [2024-07-21 11:50:03.813980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:05.142 [2024-07-21 11:50:03.863073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:05.142 [2024-07-21 11:50:03.863179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.142 [2024-07-21 11:50:03.863282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:05.710 11:50:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:05.710 11:50:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:08:05.710 11:50:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:05.968 I/O targets: 00:08:05.968 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:05.968 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:05.968 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:05.968 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:05.968 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:05.968 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:05.968 00:08:05.968 00:08:05.968 CUnit - A unit testing framework for C - Version 2.1-3 00:08:05.968 http://cunit.sourceforge.net/ 00:08:05.968 00:08:05.968 00:08:05.968 Suite: bdevio tests on: Nvme3n1 00:08:05.968 Test: blockdev write read block ...passed 00:08:05.968 Test: blockdev write zeroes read block ...passed 00:08:05.968 Test: blockdev write zeroes read no split ...passed 00:08:05.968 Test: blockdev write zeroes read split ...passed 00:08:05.968 Test: blockdev write zeroes read split partial ...passed 00:08:05.968 Test: blockdev reset ...[2024-07-21 11:50:04.632062] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:05.968 passed 00:08:05.968 Test: blockdev write read 8 blocks ...[2024-07-21 11:50:04.634042] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:05.968 passed 00:08:05.968 Test: blockdev write read size > 128k ...passed 00:08:05.968 Test: blockdev write read invalid size ...passed 00:08:05.968 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:05.968 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:05.968 Test: blockdev write read max offset ...passed 00:08:05.968 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:05.968 Test: blockdev writev readv 8 blocks ...passed 00:08:05.968 Test: blockdev writev readv 30 x 1block ...passed 00:08:05.968 Test: blockdev writev readv block ...passed 00:08:05.968 Test: blockdev writev readv size > 128k ...passed 00:08:05.968 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:05.968 Test: blockdev comparev and writev ...[2024-07-21 11:50:04.640672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9a0e000 len:0x1000 00:08:05.968 [2024-07-21 11:50:04.640731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:05.968 passed 00:08:05.968 Test: blockdev nvme passthru rw ...passed 00:08:05.968 Test: blockdev nvme passthru vendor specific ...[2024-07-21 11:50:04.641485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:05.968 [2024-07-21 11:50:04.641525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:05.968 passed 00:08:05.968 Test: blockdev nvme admin passthru ...passed 00:08:05.968 Test: blockdev copy ...passed 00:08:05.968 Suite: bdevio tests on: Nvme2n3 00:08:05.968 Test: blockdev write read block ...passed 00:08:05.968 Test: blockdev write zeroes read block ...passed 00:08:05.968 Test: blockdev write zeroes read no split ...passed 00:08:05.968 Test: blockdev write zeroes read split ...passed 00:08:05.968 Test: blockdev write zeroes read split partial ...passed 00:08:05.968 Test: blockdev reset ...[2024-07-21 11:50:04.660795] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:05.968 [2024-07-21 11:50:04.663048] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:05.968 passed 00:08:05.968 Test: blockdev write read 8 blocks ...passed 00:08:05.968 Test: blockdev write read size > 128k ...passed 00:08:05.968 Test: blockdev write read invalid size ...passed 00:08:05.968 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:05.968 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:05.968 Test: blockdev write read max offset ...passed 00:08:05.968 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:05.968 Test: blockdev writev readv 8 blocks ...passed 00:08:05.968 Test: blockdev writev readv 30 x 1block ...passed 00:08:05.968 Test: blockdev writev readv block ...passed 00:08:05.968 Test: blockdev writev readv size > 128k ...passed 00:08:05.968 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:05.968 Test: blockdev comparev and writev ...[2024-07-21 11:50:04.669199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9a08000 len:0x1000 00:08:05.968 [2024-07-21 11:50:04.669250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:05.968 passed 00:08:05.968 Test: blockdev nvme passthru rw ...passed 00:08:05.968 Test: blockdev nvme passthru vendor specific ...passed 00:08:05.968 Test: blockdev nvme admin passthru ...[2024-07-21 11:50:04.669910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:05.968 [2024-07-21 11:50:04.669948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:05.968 passed 00:08:05.968 Test: blockdev copy ...passed 00:08:05.968 Suite: bdevio tests on: Nvme2n2 00:08:05.968 Test: blockdev write read block ...passed 00:08:05.968 Test: blockdev write zeroes read block ...passed 00:08:05.968 Test: blockdev write zeroes read no split ...passed 00:08:05.968 Test: blockdev write zeroes read split ...passed 00:08:05.968 Test: blockdev write zeroes read split partial ...passed 00:08:05.968 Test: blockdev reset ...[2024-07-21 11:50:04.688973] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:05.968 [2024-07-21 11:50:04.691173] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:05.968 passed 00:08:05.968 Test: blockdev write read 8 blocks ...passed 00:08:05.968 Test: blockdev write read size > 128k ...passed 00:08:05.968 Test: blockdev write read invalid size ...passed 00:08:05.968 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:05.968 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:05.968 Test: blockdev write read max offset ...passed 00:08:05.968 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:05.968 Test: blockdev writev readv 8 blocks ...passed 00:08:05.968 Test: blockdev writev readv 30 x 1block ...passed 00:08:05.968 Test: blockdev writev readv block ...passed 00:08:05.968 Test: blockdev writev readv size > 128k ...passed 00:08:05.968 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:05.968 Test: blockdev comparev and writev ...[2024-07-21 11:50:04.696981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9a04000 len:0x1000 00:08:05.968 [2024-07-21 11:50:04.697033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:05.968 passed 00:08:05.968 Test: blockdev nvme passthru rw ...passed 00:08:05.968 Test: blockdev nvme passthru vendor specific ...passed 00:08:05.968 Test: blockdev nvme admin passthru ...[2024-07-21 11:50:04.697633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:05.968 [2024-07-21 11:50:04.697670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:05.968 passed 00:08:05.968 Test: blockdev copy ...passed 00:08:05.968 Suite: bdevio tests on: Nvme2n1 00:08:05.968 Test: blockdev write read block ...passed 00:08:05.968 Test: blockdev write zeroes read block ...passed 00:08:05.968 Test: blockdev write zeroes read no split ...passed 00:08:05.968 Test: blockdev write zeroes read split ...passed 00:08:05.968 Test: blockdev write zeroes read split partial ...passed 00:08:05.968 Test: blockdev reset ...[2024-07-21 11:50:04.715570] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:05.968 passed 00:08:05.968 Test: blockdev write read 8 blocks ...[2024-07-21 11:50:04.717739] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:05.968 passed 00:08:05.968 Test: blockdev write read size > 128k ...passed 00:08:05.968 Test: blockdev write read invalid size ...passed 00:08:05.968 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:05.968 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:05.968 Test: blockdev write read max offset ...passed 00:08:05.968 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:05.968 Test: blockdev writev readv 8 blocks ...passed 00:08:05.968 Test: blockdev writev readv 30 x 1block ...passed 00:08:05.968 Test: blockdev writev readv block ...passed 00:08:05.968 Test: blockdev writev readv size > 128k ...passed 00:08:05.968 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:05.968 Test: blockdev comparev and writev ...[2024-07-21 11:50:04.723349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9a04000 len:0x1000 00:08:05.968 [2024-07-21 11:50:04.723404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:05.968 passed 00:08:05.968 Test: blockdev nvme passthru rw ...passed 00:08:05.968 Test: blockdev nvme passthru vendor specific ...passed 00:08:05.968 Test: blockdev nvme admin passthru ...[2024-07-21 11:50:04.723985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:05.968 [2024-07-21 11:50:04.724027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:05.968 passed 00:08:05.968 Test: blockdev copy ...passed 00:08:05.968 Suite: bdevio tests on: Nvme1n1 00:08:05.968 Test: blockdev write read block ...passed 00:08:05.968 Test: blockdev write zeroes read block ...passed 00:08:05.968 Test: blockdev write zeroes read no split ...passed 00:08:05.968 Test: blockdev write zeroes read split ...passed 00:08:05.968 Test: blockdev write zeroes read split partial ...passed 00:08:05.968 Test: blockdev reset ...[2024-07-21 11:50:04.742109] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:05.968 [2024-07-21 11:50:04.744044] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:05.968 passed 00:08:05.968 Test: blockdev write read 8 blocks ...passed 00:08:05.968 Test: blockdev write read size > 128k ...passed 00:08:05.968 Test: blockdev write read invalid size ...passed 00:08:05.968 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:05.968 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:05.968 Test: blockdev write read max offset ...passed 00:08:05.968 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:05.968 Test: blockdev writev readv 8 blocks ...passed 00:08:05.968 Test: blockdev writev readv 30 x 1block ...passed 00:08:05.968 Test: blockdev writev readv block ...passed 00:08:05.968 Test: blockdev writev readv size > 128k ...passed 00:08:05.968 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:05.968 Test: blockdev comparev and writev ...[2024-07-21 11:50:04.750587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c580e000 len:0x1000 00:08:05.968 [2024-07-21 11:50:04.750714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:05.968 passed 00:08:05.968 Test: blockdev nvme passthru rw ...passed 00:08:05.968 Test: blockdev nvme passthru vendor specific ...passed 00:08:05.968 Test: blockdev nvme admin passthru ...[2024-07-21 11:50:04.751486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:05.968 [2024-07-21 11:50:04.751530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:05.968 passed 00:08:05.968 Test: blockdev copy ...passed 00:08:05.968 Suite: bdevio tests on: Nvme0n1 00:08:05.968 Test: blockdev write read block ...passed 00:08:05.968 Test: blockdev write zeroes read block ...passed 00:08:05.968 Test: blockdev write zeroes read no split ...passed 00:08:05.968 Test: blockdev write zeroes read split ...passed 00:08:05.968 Test: blockdev write zeroes read split partial ...passed 00:08:05.968 Test: blockdev reset ...[2024-07-21 11:50:04.771190] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:05.968 passed 00:08:05.968 Test: blockdev write read 8 blocks ...[2024-07-21 11:50:04.772972] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
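One note on the NOTICE lines repeated in each suite above: the COMPARE FAILURE (02/85) completions follow the comparev-and-writev case and the INVALID OPCODE (00/01) completions follow the passthru vendor-specific case, and in every suite the test that triggered them is reported as passed immediately afterwards. Both completions record the failure path the test deliberately provokes, not an error in the run.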
00:08:05.968 passed 00:08:05.968 Test: blockdev write read size > 128k ...passed 00:08:05.968 Test: blockdev write read invalid size ...passed 00:08:05.968 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:05.968 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:05.968 Test: blockdev write read max offset ...passed 00:08:05.968 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:05.968 Test: blockdev writev readv 8 blocks ...passed 00:08:05.968 Test: blockdev writev readv 30 x 1block ...passed 00:08:05.968 Test: blockdev writev readv block ...passed 00:08:05.968 Test: blockdev writev readv size > 128k ...passed 00:08:05.968 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:05.968 Test: blockdev comparev and writev ...passed 00:08:05.968 Test: blockdev nvme passthru rw ...[2024-07-21 11:50:04.777360] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:05.968 separate metadata which is not supported yet. 00:08:05.968 passed 00:08:05.968 Test: blockdev nvme passthru vendor specific ...[2024-07-21 11:50:04.777801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 Ppassed 00:08:05.968 Test: blockdev nvme admin passthru ...RP2 0x0 00:08:05.968 [2024-07-21 11:50:04.777918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:05.968 passed 00:08:05.968 Test: blockdev copy ...passed 00:08:05.969 00:08:05.969 Run Summary: Type Total Ran Passed Failed Inactive 00:08:05.969 suites 6 6 n/a 0 0 00:08:05.969 tests 138 138 138 0 0 00:08:05.969 asserts 893 893 893 0 n/a 00:08:05.969 00:08:05.969 Elapsed time = 0.391 seconds 00:08:05.969 0 00:08:05.969 11:50:04 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 77938 00:08:05.969 11:50:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 77938 ']' 00:08:05.969 11:50:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 77938 00:08:05.969 11:50:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:08:05.969 11:50:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:05.969 11:50:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77938 00:08:06.225 11:50:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:06.225 11:50:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:06.225 11:50:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77938' 00:08:06.225 killing process with pid 77938 00:08:06.225 11:50:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@965 -- # kill 77938 00:08:06.225 11:50:04 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # wait 77938 00:08:06.225 11:50:05 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:08:06.225 00:08:06.225 real 0m1.487s 00:08:06.225 user 0m3.620s 00:08:06.225 sys 0m0.366s 00:08:06.225 11:50:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:06.225 11:50:05 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:06.225 ************************************ 00:08:06.225 END TEST bdev_bounds 00:08:06.225 
************************************ 00:08:06.483 11:50:05 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:06.483 11:50:05 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:08:06.483 11:50:05 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:06.483 11:50:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.483 ************************************ 00:08:06.483 START TEST bdev_nbd 00:08:06.483 ************************************ 00:08:06.483 11:50:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:06.483 11:50:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:08:06.483 11:50:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:08:06.483 11:50:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.483 11:50:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:06.483 11:50:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:06.483 11:50:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:08:06.483 11:50:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:08:06.483 11:50:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:08:06.483 11:50:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:06.483 11:50:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:08:06.483 11:50:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:08:06.483 11:50:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:06.483 11:50:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:08:06.483 11:50:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:06.483 11:50:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:08:06.483 11:50:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=77986 00:08:06.483 11:50:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:06.483 11:50:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 77986 /var/tmp/spdk-nbd.sock 00:08:06.483 11:50:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 77986 ']' 00:08:06.483 11:50:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:06.483 11:50:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:06.483 11:50:05 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:06.483 11:50:05 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:06.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:06.483 11:50:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:06.483 11:50:05 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:06.483 [2024-07-21 11:50:05.192425] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:06.483 [2024-07-21 11:50:05.192637] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.739 [2024-07-21 11:50:05.362197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.739 [2024-07-21 11:50:05.414335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.304 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:07.304 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:08:07.304 11:50:06 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:07.304 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.304 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:07.304 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:07.304 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:07.304 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.304 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:07.304 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:07.304 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:07.304 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:07.304 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:07.304 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:07.304 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:07.562 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:07.562 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:07.562 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:07.562 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:08:07.562 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:07.562 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:07.562 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:07.562 11:50:06 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:08:07.562 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:07.562 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:07.562 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:07.562 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.562 1+0 records in 00:08:07.562 1+0 records out 00:08:07.562 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000777612 s, 5.3 MB/s 00:08:07.562 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.562 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:07.562 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.562 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:07.562 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:07.562 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:07.562 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:07.562 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:07.820 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:07.820 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:07.820 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:07.820 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:08:07.820 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:07.820 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:07.820 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:07.820 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:08:07.820 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:07.820 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:07.820 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:07.820 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:07.820 1+0 records in 00:08:07.820 1+0 records out 00:08:07.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676581 s, 6.1 MB/s 00:08:07.820 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.820 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:07.820 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:07.820 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:07.820 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:07.820 11:50:06 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:07.820 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:07.820 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:08.079 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:08.079 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:08.079 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:08.079 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:08:08.079 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:08.079 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:08.079 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:08.079 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:08:08.079 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:08.079 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:08.079 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:08.079 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.079 1+0 records in 00:08:08.079 1+0 records out 00:08:08.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000878691 s, 4.7 MB/s 00:08:08.079 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.079 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:08.079 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.079 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:08.079 11:50:06 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:08.079 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:08.079 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:08.079 11:50:06 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:08.339 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:08.339 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:08.339 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:08.339 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:08:08.339 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:08.339 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:08.339 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:08.339 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:08:08.339 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:08.339 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:08.339 
11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:08.339 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.339 1+0 records in 00:08:08.339 1+0 records out 00:08:08.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00100966 s, 4.1 MB/s 00:08:08.340 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.340 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:08.340 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.340 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:08.340 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:08.340 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:08.340 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:08.340 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:08.599 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:08.599 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:08.599 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:08.599 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:08:08.599 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:08.599 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:08.599 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:08.599 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:08:08.599 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:08.599 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:08.599 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:08.599 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.599 1+0 records in 00:08:08.599 1+0 records out 00:08:08.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000761601 s, 5.4 MB/s 00:08:08.599 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.599 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:08.599 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.599 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:08.599 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:08.599 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:08.599 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:08.599 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Nvme3n1 00:08:08.859 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:08.859 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:08.859 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:08.859 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:08:08.859 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:08.859 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:08.859 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:08.859 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:08:08.859 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:08.859 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:08.859 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:08.859 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:08.859 1+0 records in 00:08:08.859 1+0 records out 00:08:08.859 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000847052 s, 4.8 MB/s 00:08:08.859 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.859 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:08.859 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:08.859 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:08.859 11:50:07 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:08.859 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:08.859 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:08:08.859 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:09.119 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:09.119 { 00:08:09.119 "nbd_device": "/dev/nbd0", 00:08:09.119 "bdev_name": "Nvme0n1" 00:08:09.119 }, 00:08:09.119 { 00:08:09.119 "nbd_device": "/dev/nbd1", 00:08:09.119 "bdev_name": "Nvme1n1" 00:08:09.119 }, 00:08:09.119 { 00:08:09.119 "nbd_device": "/dev/nbd2", 00:08:09.119 "bdev_name": "Nvme2n1" 00:08:09.119 }, 00:08:09.119 { 00:08:09.119 "nbd_device": "/dev/nbd3", 00:08:09.119 "bdev_name": "Nvme2n2" 00:08:09.119 }, 00:08:09.119 { 00:08:09.119 "nbd_device": "/dev/nbd4", 00:08:09.119 "bdev_name": "Nvme2n3" 00:08:09.119 }, 00:08:09.119 { 00:08:09.119 "nbd_device": "/dev/nbd5", 00:08:09.119 "bdev_name": "Nvme3n1" 00:08:09.119 } 00:08:09.119 ]' 00:08:09.119 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:09.119 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:09.119 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:09.119 { 00:08:09.119 "nbd_device": "/dev/nbd0", 00:08:09.119 "bdev_name": "Nvme0n1" 00:08:09.119 }, 00:08:09.119 { 00:08:09.119 "nbd_device": "/dev/nbd1", 00:08:09.119 
"bdev_name": "Nvme1n1" 00:08:09.119 }, 00:08:09.119 { 00:08:09.119 "nbd_device": "/dev/nbd2", 00:08:09.119 "bdev_name": "Nvme2n1" 00:08:09.119 }, 00:08:09.119 { 00:08:09.119 "nbd_device": "/dev/nbd3", 00:08:09.119 "bdev_name": "Nvme2n2" 00:08:09.119 }, 00:08:09.119 { 00:08:09.119 "nbd_device": "/dev/nbd4", 00:08:09.119 "bdev_name": "Nvme2n3" 00:08:09.119 }, 00:08:09.119 { 00:08:09.119 "nbd_device": "/dev/nbd5", 00:08:09.119 "bdev_name": "Nvme3n1" 00:08:09.119 } 00:08:09.119 ]' 00:08:09.119 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:08:09.119 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:09.120 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:08:09.120 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:09.120 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:09.120 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.120 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:09.120 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:09.120 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:09.120 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:09.120 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.120 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.120 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:09.120 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:09.120 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.120 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.120 11:50:07 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:09.382 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:09.382 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:09.382 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:09.382 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.382 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.382 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:09.382 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:09.382 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.382 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.382 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:09.655 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:09.655 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd2 00:08:09.655 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:09.655 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.655 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.655 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:09.655 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:09.655 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.655 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.655 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:09.936 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:09.936 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:09.936 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:09.936 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:09.936 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:09.936 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:09.936 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:09.936 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:09.936 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:09.936 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:10.195 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:10.195 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:10.195 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:10.195 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.195 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.195 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:10.195 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.195 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:10.195 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:10.195 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:10.195 11:50:08 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:10.195 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:10.195 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:10.195 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:10.195 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:10.195 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:10.195 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:10.195 11:50:09 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:10.195 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:10.195 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.195 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:10.455 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk Nvme0n1 /dev/nbd0 00:08:10.714 /dev/nbd0 00:08:10.714 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:10.714 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:10.714 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:08:10.714 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:10.714 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:10.714 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:10.714 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:08:10.714 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:10.714 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:10.714 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:10.714 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:10.714 1+0 records in 00:08:10.714 1+0 records out 00:08:10.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000715068 s, 5.7 MB/s 00:08:10.714 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.714 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:10.714 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.714 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:10.715 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:10.715 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:10.715 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:10.715 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:08:10.973 /dev/nbd1 00:08:10.973 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:10.973 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:10.973 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:08:10.973 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:10.973 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:10.973 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:10.973 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:08:10.973 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:10.973 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:10.973 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:10.973 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:10.973 1+0 records in 00:08:10.973 1+0 records out 00:08:10.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000627484 s, 6.5 MB/s 
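The waitfornbd helper whose trace is interleaved above and below follows a simple pattern: poll /proc/partitions until the kernel exposes the new nbd node, then prove the device with a single O_DIRECT read. A reconstruction from the trace (a sketch only; the real helper lives in common/autotest_common.sh, and the scratch-file path is shortened here):

    waitfornbd() {
        local nbd_name=$1 i
        # wait up to 20 attempts for the device to show up in /proc/partitions
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
        done
        # read one 4096-byte block with O_DIRECT to confirm the device does I/O
        for ((i = 1; i <= 20; i++)); do
            dd if=/dev/"$nbd_name" of=nbdtest bs=4096 count=1 iflag=direct && break
        done
        # the copy must be non-empty; clean up the scratch file either way
        local size
        size=$(stat -c %s nbdtest)
        rm -f nbdtest
        [ "$size" != 0 ]
    }

Each call pairs with an rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk <bdev> /dev/nbdX invocation above it, which maps the named bdev onto the requested nbd node and prints the device path on success.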
00:08:10.973 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.973 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:10.973 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:10.973 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:10.973 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:10.973 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:10.973 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:10.973 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:08:11.256 /dev/nbd10 00:08:11.256 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:11.256 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:11.256 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:08:11.256 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:11.256 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:11.256 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:11.256 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:08:11.256 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:11.256 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:11.256 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:11.256 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:11.256 1+0 records in 00:08:11.256 1+0 records out 00:08:11.256 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569738 s, 7.2 MB/s 00:08:11.256 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:11.256 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:11.256 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:11.256 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:11.256 11:50:09 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:11.256 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:11.256 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:11.256 11:50:09 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:08:11.515 /dev/nbd11 00:08:11.515 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:11.515 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:11.515 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:08:11.515 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 
00:08:11.515 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:11.515 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:11.515 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:08:11.515 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:11.515 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:11.515 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:11.515 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:11.515 1+0 records in 00:08:11.515 1+0 records out 00:08:11.515 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00073931 s, 5.5 MB/s 00:08:11.515 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:11.515 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:11.516 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:11.516 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:11.516 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:11.516 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:11.516 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:11.516 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:08:11.774 /dev/nbd12 00:08:11.774 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:11.774 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:11.774 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:08:11.775 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:11.775 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:11.775 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:11.775 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:08:11.775 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:11.775 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:11.775 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:11.775 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:11.775 1+0 records in 00:08:11.775 1+0 records out 00:08:11.775 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000489071 s, 8.4 MB/s 00:08:11.775 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:11.775 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:11.775 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:11.775 11:50:10 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:11.775 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:11.775 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:11.775 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:11.775 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:08:12.105 /dev/nbd13 00:08:12.105 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:12.105 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:12.105 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:08:12.105 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:12.105 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:12.105 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:12.105 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:08:12.105 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:12.105 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:12.105 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:12.105 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:12.105 1+0 records in 00:08:12.105 1+0 records out 00:08:12.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000632943 s, 6.5 MB/s 00:08:12.105 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:12.105 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:12.105 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:12.105 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:12.105 11:50:10 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:12.105 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:12.106 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:08:12.106 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:12.106 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:12.106 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:12.106 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:12.106 { 00:08:12.106 "nbd_device": "/dev/nbd0", 00:08:12.106 "bdev_name": "Nvme0n1" 00:08:12.106 }, 00:08:12.106 { 00:08:12.106 "nbd_device": "/dev/nbd1", 00:08:12.106 "bdev_name": "Nvme1n1" 00:08:12.106 }, 00:08:12.106 { 00:08:12.106 "nbd_device": "/dev/nbd10", 00:08:12.106 "bdev_name": "Nvme2n1" 00:08:12.106 }, 00:08:12.106 { 00:08:12.106 "nbd_device": "/dev/nbd11", 00:08:12.106 "bdev_name": "Nvme2n2" 00:08:12.106 }, 00:08:12.106 { 00:08:12.106 "nbd_device": "/dev/nbd12", 00:08:12.106 "bdev_name": "Nvme2n3" 00:08:12.106 
}, 00:08:12.106 { 00:08:12.106 "nbd_device": "/dev/nbd13", 00:08:12.106 "bdev_name": "Nvme3n1" 00:08:12.106 } 00:08:12.106 ]' 00:08:12.106 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:12.106 { 00:08:12.106 "nbd_device": "/dev/nbd0", 00:08:12.106 "bdev_name": "Nvme0n1" 00:08:12.106 }, 00:08:12.106 { 00:08:12.106 "nbd_device": "/dev/nbd1", 00:08:12.106 "bdev_name": "Nvme1n1" 00:08:12.106 }, 00:08:12.106 { 00:08:12.106 "nbd_device": "/dev/nbd10", 00:08:12.106 "bdev_name": "Nvme2n1" 00:08:12.106 }, 00:08:12.106 { 00:08:12.106 "nbd_device": "/dev/nbd11", 00:08:12.106 "bdev_name": "Nvme2n2" 00:08:12.106 }, 00:08:12.106 { 00:08:12.106 "nbd_device": "/dev/nbd12", 00:08:12.106 "bdev_name": "Nvme2n3" 00:08:12.106 }, 00:08:12.106 { 00:08:12.106 "nbd_device": "/dev/nbd13", 00:08:12.106 "bdev_name": "Nvme3n1" 00:08:12.106 } 00:08:12.106 ]' 00:08:12.106 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:12.463 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:12.463 /dev/nbd1 00:08:12.463 /dev/nbd10 00:08:12.463 /dev/nbd11 00:08:12.463 /dev/nbd12 00:08:12.463 /dev/nbd13' 00:08:12.463 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:12.463 /dev/nbd1 00:08:12.463 /dev/nbd10 00:08:12.463 /dev/nbd11 00:08:12.463 /dev/nbd12 00:08:12.463 /dev/nbd13' 00:08:12.463 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:12.463 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:08:12.463 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:08:12.463 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:08:12.463 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:08:12.463 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:08:12.463 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:12.463 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:12.463 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:12.463 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:12.463 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:12.463 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:12.463 256+0 records in 00:08:12.463 256+0 records out 00:08:12.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137428 s, 76.3 MB/s 00:08:12.463 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:12.463 11:50:10 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:12.463 256+0 records in 00:08:12.463 256+0 records out 00:08:12.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.063594 s, 16.5 MB/s 00:08:12.463 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:12.463 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 
bs=4096 count=256 oflag=direct 00:08:12.463 256+0 records in 00:08:12.463 256+0 records out 00:08:12.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0831472 s, 12.6 MB/s 00:08:12.463 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:12.464 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:12.464 256+0 records in 00:08:12.464 256+0 records out 00:08:12.464 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0816376 s, 12.8 MB/s 00:08:12.464 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:12.464 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:12.746 256+0 records in 00:08:12.747 256+0 records out 00:08:12.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.079679 s, 13.2 MB/s 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:12.747 256+0 records in 00:08:12.747 256+0 records out 00:08:12.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0806936 s, 13.0 MB/s 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:12.747 256+0 records in 00:08:12.747 256+0 records out 00:08:12.747 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0811911 s, 12.9 MB/s 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:12.747 
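Note: the write and verify passes running here (nbd_dd_data_verify) push one 1 MiB random pattern file through every exported NBD device, then read each device back with cmp. The round trip, reduced to its essentials (paths and error handling are illustrative):

# Write a random 1 MiB pattern to every device, then compare it back.
tmp_file=/tmp/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256        # 1 MiB source pattern
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev" || echo "mismatch on $dev"   # non-zero exit => corruption
done
rm -f "$tmp_file"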
11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:12.747 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:13.052 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:13.052 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:13.052 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:13.052 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.052 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.052 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:13.052 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.052 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.052 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.052 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:13.383 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:13.383 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:13.383 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:13.383 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.383 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.383 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:13.383 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.383 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.383 11:50:11 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.383 11:50:11 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:13.383 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:13.383 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:13.383 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:13.383 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.383 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.383 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:13.383 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.383 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.383 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.383 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:13.642 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:13.642 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:13.642 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:13.642 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.642 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.642 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:13.642 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.642 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.642 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.642 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:13.944 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:13.944 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:13.944 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:13.944 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:13.944 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:13.944 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:13.944 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:13.944 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:13.944 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:13.945 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:13.945 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:13.945 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:13.945 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:14.229 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:14.229 11:50:12 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:14.229 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:14.229 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:14.229 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:14.229 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:14.229 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.229 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:14.229 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:14.229 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:14.229 11:50:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:14.229 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:14.229 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:14.229 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:14.229 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:14.229 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:14.229 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:14.229 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:14.229 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:14.229 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:14.229 11:50:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:14.229 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:14.229 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:14.229 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:14.229 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:14.229 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:14.487 malloc_lvol_verify 00:08:14.487 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:14.744 23f57ee6-3d3a-4779-a8b4-934d9d7307cf 00:08:14.744 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:15.002 6ce25164-b895-4d92-b532-0e79393d7991 00:08:15.002 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:15.002 /dev/nbd0 00:08:15.002 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:15.002 mke2fs 1.46.5 (30-Dec-2021) 00:08:15.002 Discarding device blocks: 0/4096 done 00:08:15.260 Creating filesystem with 
4096 1k blocks and 1024 inodes 00:08:15.260 00:08:15.260 Allocating group tables: 0/1 done 00:08:15.260 Writing inode tables: 0/1 done 00:08:15.260 Creating journal (1024 blocks): done 00:08:15.260 Writing superblocks and filesystem accounting information: 0/1 done 00:08:15.260 00:08:15.260 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:15.261 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:15.261 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:15.261 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:15.261 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:15.261 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:15.261 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:15.261 11:50:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:15.261 11:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:15.261 11:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:15.261 11:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:15.261 11:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:15.261 11:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:15.261 11:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:15.261 11:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:15.261 11:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:15.261 11:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:15.261 11:50:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:15.261 11:50:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 77986 00:08:15.261 11:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 77986 ']' 00:08:15.261 11:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 77986 00:08:15.261 11:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:08:15.261 11:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:15.261 11:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 77986 00:08:15.261 11:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:15.261 11:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:15.261 killing process with pid 77986 00:08:15.261 11:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 77986' 00:08:15.261 11:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@965 -- # kill 77986 00:08:15.261 11:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # wait 77986 00:08:15.547 11:50:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:08:15.547 00:08:15.547 real 0m9.290s 00:08:15.547 user 0m13.240s 00:08:15.547 sys 0m3.499s 00:08:15.547 11:50:14 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:15.547 11:50:14 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:15.547 ************************************ 00:08:15.547 END TEST bdev_nbd 00:08:15.547 ************************************ 00:08:15.805 11:50:14 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:08:15.805 11:50:14 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:08:15.805 skipping fio tests on NVMe due to multi-ns failures. 00:08:15.805 11:50:14 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:15.805 11:50:14 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:15.805 11:50:14 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:15.805 11:50:14 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:08:15.805 11:50:14 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:15.805 11:50:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:15.805 ************************************ 00:08:15.805 START TEST bdev_verify 00:08:15.805 ************************************ 00:08:15.805 11:50:14 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:15.805 [2024-07-21 11:50:14.538125] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:15.805 [2024-07-21 11:50:14.538257] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78356 ] 00:08:16.063 [2024-07-21 11:50:14.687550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:16.063 [2024-07-21 11:50:14.734260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.063 [2024-07-21 11:50:14.734376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.321 Running I/O for 5 seconds... 
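Note: bdev_verify drives the bdevperf example app against the NVMe bdevs described in bdev.json. A sketch of the invocation with the flags spelled out; the flag glosses are best-effort readings of bdevperf's usage text, and -C combined with -m 0x3 is why each Nvme bdev appears twice in the table below, once per core mask:

args=(
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    -q 128       # 128 outstanding I/Os per job
    -o 4096      # 4 KiB I/O size
    -w verify    # write a pattern, read it back, compare
    -t 5         # run for 5 seconds
    -C           # every reactor core targets every bdev
    -m 0x3       # two reactor cores (0 and 1)
)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf "${args[@]}"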
00:08:21.584 00:08:21.584 Latency(us) 00:08:21.584 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.584 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:21.584 Verification LBA range: start 0x0 length 0xbd0bd 00:08:21.584 Nvme0n1 : 5.08 1714.44 6.70 0.00 0.00 74493.96 14881.54 74178.74 00:08:21.584 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:21.584 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:21.584 Nvme0n1 : 5.08 1687.36 6.59 0.00 0.00 75720.45 9787.47 132789.10 00:08:21.584 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:21.584 Verification LBA range: start 0x0 length 0xa0000 00:08:21.584 Nvme1n1 : 5.08 1713.78 6.69 0.00 0.00 74436.29 16255.22 76926.10 00:08:21.584 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:21.584 Verification LBA range: start 0xa0000 length 0xa0000 00:08:21.584 Nvme1n1 : 5.08 1686.81 6.59 0.00 0.00 75578.57 10245.37 130041.74 00:08:21.585 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:21.585 Verification LBA range: start 0x0 length 0x80000 00:08:21.585 Nvme2n1 : 5.08 1713.29 6.69 0.00 0.00 74332.79 16598.64 78299.78 00:08:21.585 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:21.585 Verification LBA range: start 0x80000 length 0x80000 00:08:21.585 Nvme2n1 : 5.07 1677.45 6.55 0.00 0.00 75748.96 7784.19 128210.17 00:08:21.585 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:21.585 Verification LBA range: start 0x0 length 0x80000 00:08:21.585 Nvme2n2 : 5.08 1712.75 6.69 0.00 0.00 74229.08 15224.96 78757.67 00:08:21.585 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:21.585 Verification LBA range: start 0x80000 length 0x80000 00:08:21.585 Nvme2n2 : 5.08 1676.88 6.55 0.00 0.00 75648.43 8127.61 117220.72 00:08:21.585 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:21.585 Verification LBA range: start 0x0 length 0x80000 00:08:21.585 Nvme2n3 : 5.08 1712.31 6.69 0.00 0.00 74105.95 14251.93 79215.57 00:08:21.585 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:21.585 Verification LBA range: start 0x80000 length 0x80000 00:08:21.585 Nvme2n3 : 5.08 1676.45 6.55 0.00 0.00 75533.21 7555.24 128210.17 00:08:21.585 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:21.585 Verification LBA range: start 0x0 length 0x20000 00:08:21.585 Nvme3n1 : 5.08 1711.72 6.69 0.00 0.00 73991.83 10130.89 78299.78 00:08:21.585 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:21.585 Verification LBA range: start 0x20000 length 0x20000 00:08:21.585 Nvme3n1 : 5.09 1685.61 6.58 0.00 0.00 75115.27 7555.24 130957.53 00:08:21.585 =================================================================================================================== 00:08:21.585 Total : 20368.85 79.57 0.00 0.00 74905.08 7555.24 132789.10 00:08:22.151 00:08:22.151 real 0m6.445s 00:08:22.151 user 0m12.080s 00:08:22.151 sys 0m0.231s 00:08:22.151 11:50:20 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:22.151 11:50:20 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:22.151 ************************************ 00:08:22.151 END TEST bdev_verify 00:08:22.151 ************************************ 00:08:22.151 11:50:20 blockdev_nvme -- bdev/blockdev.sh@778 
-- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:22.151 11:50:20 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:08:22.151 11:50:20 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:22.151 11:50:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:22.151 ************************************ 00:08:22.151 START TEST bdev_verify_big_io 00:08:22.151 ************************************ 00:08:22.151 11:50:20 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:22.411 [2024-07-21 11:50:21.029624] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:22.411 [2024-07-21 11:50:21.029745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78443 ] 00:08:22.411 [2024-07-21 11:50:21.193924] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:22.411 [2024-07-21 11:50:21.240538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.411 [2024-07-21 11:50:21.240634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.978 Running I/O for 5 seconds... 00:08:29.555 00:08:29.555 Latency(us) 00:08:29.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.555 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:29.555 Verification LBA range: start 0x0 length 0xbd0b 00:08:29.555 Nvme0n1 : 5.57 160.79 10.05 0.00 0.00 775190.10 35028.85 802229.32 00:08:29.555 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:29.555 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:29.555 Nvme0n1 : 5.57 157.50 9.84 0.00 0.00 782288.27 23924.93 805892.47 00:08:29.555 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:29.555 Verification LBA range: start 0x0 length 0xa000 00:08:29.555 Nvme1n1 : 5.58 160.71 10.04 0.00 0.00 756279.77 88373.44 692334.90 00:08:29.555 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:29.555 Verification LBA range: start 0xa000 length 0xa000 00:08:29.555 Nvme1n1 : 5.58 160.56 10.03 0.00 0.00 753468.53 85626.08 706987.49 00:08:29.555 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:29.555 Verification LBA range: start 0x0 length 0x8000 00:08:29.555 Nvme2n1 : 5.58 160.55 10.03 0.00 0.00 738498.32 124547.02 703324.34 00:08:29.555 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:29.555 Verification LBA range: start 0x8000 length 0x8000 00:08:29.555 Nvme2n1 : 5.58 160.46 10.03 0.00 0.00 735032.44 119052.30 717976.93 00:08:29.555 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:29.555 Verification LBA range: start 0x0 length 0x8000 00:08:29.555 Nvme2n2 : 5.64 170.08 10.63 0.00 0.00 687387.20 16598.64 714313.78 00:08:29.555 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:29.555 Verification LBA range: start 0x8000 length 0x8000 00:08:29.555 Nvme2n2 : 5.65 169.98 10.62 0.00 0.00 684249.62 
15110.48 728966.37 00:08:29.555 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:29.555 Verification LBA range: start 0x0 length 0x8000 00:08:29.555 Nvme2n3 : 5.69 176.24 11.02 0.00 0.00 647059.54 16484.16 725303.22 00:08:29.555 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:29.555 Verification LBA range: start 0x8000 length 0x8000 00:08:29.555 Nvme2n3 : 5.69 176.36 11.02 0.00 0.00 642952.25 13565.09 747282.11 00:08:29.555 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:29.555 Verification LBA range: start 0x0 length 0x2000 00:08:29.555 Nvme3n1 : 5.70 191.00 11.94 0.00 0.00 584682.39 1795.80 743618.96 00:08:29.555 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:29.555 Verification LBA range: start 0x2000 length 0x2000 00:08:29.555 Nvme3n1 : 5.71 198.27 12.39 0.00 0.00 560603.83 203.91 758271.55 00:08:29.555 =================================================================================================================== 00:08:29.555 Total : 2042.50 127.66 0.00 0.00 689824.60 203.91 805892.47 00:08:29.815 00:08:29.815 real 0m7.569s 00:08:29.815 user 0m14.255s 00:08:29.815 sys 0m0.292s 00:08:29.815 11:50:28 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:29.815 11:50:28 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:29.815 ************************************ 00:08:29.815 END TEST bdev_verify_big_io 00:08:29.815 ************************************ 00:08:29.815 11:50:28 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:29.815 11:50:28 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:29.815 11:50:28 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:29.815 11:50:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:29.815 ************************************ 00:08:29.815 START TEST bdev_write_zeroes 00:08:29.815 ************************************ 00:08:29.815 11:50:28 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:29.815 [2024-07-21 11:50:28.666216] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:29.815 [2024-07-21 11:50:28.666315] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78545 ] 00:08:30.074 [2024-07-21 11:50:28.825002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.074 [2024-07-21 11:50:28.870344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.640 Running I/O for 1 seconds... 
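Note: bdev_write_zeroes reuses the same bdevperf binary with a different workload, single-core this time, matching the command in the run_test line above:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1

In the table that follows, the MiB/s column is just IOPS times the I/O size: 10755.98 IOPS x 4096 B / 2^20 = 42.02 MiB/s for Nvme0n1, matching the reported value.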
00:08:31.570 00:08:31.571 Latency(us) 00:08:31.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:31.571 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:31.571 Nvme0n1 : 1.01 10755.98 42.02 0.00 0.00 11865.23 8814.45 18773.63 00:08:31.571 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:31.571 Nvme1n1 : 1.01 10743.51 41.97 0.00 0.00 11864.40 9329.58 19231.52 00:08:31.571 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:31.571 Nvme2n1 : 1.02 10769.47 42.07 0.00 0.00 11811.07 7555.24 17171.00 00:08:31.571 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:31.571 Nvme2n2 : 1.02 10757.13 42.02 0.00 0.00 11804.58 7841.43 16827.58 00:08:31.571 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:31.571 Nvme2n3 : 1.02 10792.36 42.16 0.00 0.00 11725.11 4893.74 17171.00 00:08:31.571 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:31.571 Nvme3n1 : 1.02 10782.86 42.12 0.00 0.00 11717.81 5065.45 16942.06 00:08:31.571 =================================================================================================================== 00:08:31.571 Total : 64601.32 252.35 0.00 0.00 11797.75 4893.74 19231.52 00:08:31.828 00:08:31.828 real 0m1.924s 00:08:31.828 user 0m1.606s 00:08:31.828 sys 0m0.208s 00:08:31.828 11:50:30 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:31.828 11:50:30 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:31.828 ************************************ 00:08:31.828 END TEST bdev_write_zeroes 00:08:31.829 ************************************ 00:08:31.829 11:50:30 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:31.829 11:50:30 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:31.829 11:50:30 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:31.829 11:50:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:31.829 ************************************ 00:08:31.829 START TEST bdev_json_nonenclosed 00:08:31.829 ************************************ 00:08:31.829 11:50:30 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:31.829 [2024-07-21 11:50:30.664631] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:31.829 [2024-07-21 11:50:30.664779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78583 ] 00:08:32.088 [2024-07-21 11:50:30.837351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.088 [2024-07-21 11:50:30.882634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.088 [2024-07-21 11:50:30.882732] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:08:32.088 [2024-07-21 11:50:30.882755] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:32.088 [2024-07-21 11:50:30.882765] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:32.347 00:08:32.347 real 0m0.436s 00:08:32.347 user 0m0.201s 00:08:32.347 sys 0m0.130s 00:08:32.347 11:50:31 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:32.347 ************************************ 00:08:32.347 END TEST bdev_json_nonenclosed 00:08:32.347 ************************************ 00:08:32.347 11:50:31 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:32.347 11:50:31 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:32.347 11:50:31 blockdev_nvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:08:32.347 11:50:31 blockdev_nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:32.347 11:50:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:32.347 ************************************ 00:08:32.347 START TEST bdev_json_nonarray 00:08:32.347 ************************************ 00:08:32.347 11:50:31 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:32.347 [2024-07-21 11:50:31.148937] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:32.347 [2024-07-21 11:50:31.149055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78614 ] 00:08:32.606 [2024-07-21 11:50:31.312212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.606 [2024-07-21 11:50:31.362097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.606 [2024-07-21 11:50:31.362203] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
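Note: bdev_json_nonenclosed and bdev_json_nonarray are negative tests; each feeds bdevperf a deliberately malformed config and passes only if json_config_prepare_ctx rejects it and the app exits non-zero, which is exactly the ERROR/WARNING sequence logged above. For contrast, the shape the loader expects (file name illustrative):

# A well-formed config is a JSON object whose "subsystems" key is an array.
cat > /tmp/valid.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF
# nonenclosed.json omits the outer braces; nonarray.json makes "subsystems"
# something other than an array. Both are refused before any bdev is created.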
00:08:32.606 [2024-07-21 11:50:31.362235] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:32.606 [2024-07-21 11:50:31.362245] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:32.865 00:08:32.865 real 0m0.410s 00:08:32.865 user 0m0.181s 00:08:32.865 sys 0m0.124s 00:08:32.865 11:50:31 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:32.865 11:50:31 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:32.865 ************************************ 00:08:32.865 END TEST bdev_json_nonarray 00:08:32.865 ************************************ 00:08:32.865 11:50:31 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:08:32.865 11:50:31 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:08:32.865 11:50:31 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:08:32.865 11:50:31 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:08:32.865 11:50:31 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:08:32.865 11:50:31 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:32.865 11:50:31 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:32.865 11:50:31 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:32.865 11:50:31 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:32.865 11:50:31 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:32.865 11:50:31 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:32.865 00:08:32.865 real 0m31.021s 00:08:32.865 user 0m47.970s 00:08:32.865 sys 0m6.071s 00:08:32.865 11:50:31 blockdev_nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:32.865 11:50:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:32.865 ************************************ 00:08:32.865 END TEST blockdev_nvme 00:08:32.865 ************************************ 00:08:32.865 11:50:31 -- spdk/autotest.sh@213 -- # uname -s 00:08:32.865 11:50:31 -- spdk/autotest.sh@213 -- # [[ Linux == Linux ]] 00:08:32.865 11:50:31 -- spdk/autotest.sh@214 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:32.865 11:50:31 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:32.865 11:50:31 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:32.865 11:50:31 -- common/autotest_common.sh@10 -- # set +x 00:08:32.865 ************************************ 00:08:32.865 START TEST blockdev_nvme_gpt 00:08:32.865 ************************************ 00:08:32.866 11:50:31 blockdev_nvme_gpt -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:32.866 * Looking for test storage... 
00:08:32.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:32.866 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # uname -s 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # test_type=gpt 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # crypto_device= 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # dek= 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # env_ctx= 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == bdev ]] 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@690 -- # [[ gpt == crypto_* ]] 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=78683 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:33.125 11:50:31 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 78683 00:08:33.125 11:50:31 blockdev_nvme_gpt -- common/autotest_common.sh@827 -- # '[' -z 78683 ']' 00:08:33.125 11:50:31 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.125 11:50:31 blockdev_nvme_gpt -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:33.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.125 11:50:31 blockdev_nvme_gpt -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
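Note: waitforlisten blocks here until the freshly started spdk_tgt answers RPCs on /var/tmp/spdk.sock. A simplified stand-in for that helper (the real one has more checks; rpc_get_methods is a standard SPDK RPC used purely as a liveness probe):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
spdk_tgt_pid=$!
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Poll the UNIX-domain RPC socket until the target responds.
for ((i = 0; i < 100; i++)); do
    kill -0 "$spdk_tgt_pid" 2>/dev/null || exit 1    # target died during startup
    "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done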
00:08:33.125 11:50:31 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:33.125 11:50:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:33.125 [2024-07-21 11:50:31.827365] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:33.125 [2024-07-21 11:50:31.827470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78683 ] 00:08:33.384 [2024-07-21 11:50:31.989150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.384 [2024-07-21 11:50:32.038157] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.006 11:50:32 blockdev_nvme_gpt -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:34.006 11:50:32 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # return 0 00:08:34.006 11:50:32 blockdev_nvme_gpt -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:08:34.006 11:50:32 blockdev_nvme_gpt -- bdev/blockdev.sh@702 -- # setup_gpt_conf 00:08:34.006 11:50:32 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:34.269 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:34.529 Waiting for block devices as requested 00:08:34.788 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:34.788 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:34.788 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:35.046 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:40.317 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:40.317 11:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1666 -- # local nvme bdf 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:08:40.317 
11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n2 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n2 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n3 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme2n3 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3c3n1 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme3c3n1 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:40.317 11:50:38 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:08:40.317 11:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:10.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:11.0/nvme/nvme0/nvme0n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n2' '/sys/bus/pci/drivers/nvme/0000:00:12.0/nvme/nvme2/nvme2n3' '/sys/bus/pci/drivers/nvme/0000:00:13.0/nvme/nvme3/nvme3c3n1') 00:08:40.317 11:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # local nvme_devs nvme_dev 00:08:40.317 11:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@108 -- # gpt_nvme= 00:08:40.317 11:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:40.317 11:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # [[ -z '' ]] 00:08:40.317 11:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # dev=/dev/nvme1n1 00:08:40.317 11:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # parted /dev/nvme1n1 -ms print 00:08:40.317 11:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # pt='Error: /dev/nvme1n1: unrecognised disk label 00:08:40.317 BYT; 00:08:40.317 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:40.317 11:50:38 blockdev_nvme_gpt 
-- bdev/blockdev.sh@114 -- # [[ Error: /dev/nvme1n1: unrecognised disk label 00:08:40.317 BYT; 00:08:40.317 /dev/nvme1n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\1\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:40.317 11:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # gpt_nvme=/dev/nvme1n1 00:08:40.317 11:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@116 -- # break 00:08:40.317 11:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@119 -- # [[ -n /dev/nvme1n1 ]] 00:08:40.317 11:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:40.317 11:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@125 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:40.317 11:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@128 -- # parted -s /dev/nvme1n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:40.317 11:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt_old 00:08:40.317 11:50:38 blockdev_nvme_gpt -- scripts/common.sh@408 -- # local spdk_guid 00:08:40.317 11:50:38 blockdev_nvme_gpt -- scripts/common.sh@410 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:40.317 11:50:38 blockdev_nvme_gpt -- scripts/common.sh@412 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:40.317 11:50:38 blockdev_nvme_gpt -- scripts/common.sh@413 -- # IFS='()' 00:08:40.317 11:50:38 blockdev_nvme_gpt -- scripts/common.sh@413 -- # read -r _ spdk_guid _ 00:08:40.317 11:50:38 blockdev_nvme_gpt -- scripts/common.sh@413 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:40.317 11:50:38 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:40.317 11:50:38 blockdev_nvme_gpt -- scripts/common.sh@414 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:40.317 11:50:38 blockdev_nvme_gpt -- scripts/common.sh@416 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:40.317 11:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:40.317 11:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # get_spdk_gpt 00:08:40.317 11:50:38 blockdev_nvme_gpt -- scripts/common.sh@420 -- # local spdk_guid 00:08:40.317 11:50:38 blockdev_nvme_gpt -- scripts/common.sh@422 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:40.317 11:50:38 blockdev_nvme_gpt -- scripts/common.sh@424 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:40.317 11:50:38 blockdev_nvme_gpt -- scripts/common.sh@425 -- # IFS='()' 00:08:40.317 11:50:38 blockdev_nvme_gpt -- scripts/common.sh@425 -- # read -r _ spdk_guid _ 00:08:40.317 11:50:38 blockdev_nvme_gpt -- scripts/common.sh@425 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:40.317 11:50:38 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:40.317 11:50:38 blockdev_nvme_gpt -- scripts/common.sh@426 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:40.317 11:50:38 blockdev_nvme_gpt -- scripts/common.sh@428 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:40.317 11:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:40.317 11:50:38 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 
1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1 00:08:41.251 The operation has completed successfully. 00:08:41.251 11:50:39 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1 00:08:42.192 The operation has completed successfully. 00:08:42.192 11:50:40 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:42.759 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:43.694 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:43.694 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:43.694 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:43.694 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:43.694 11:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # rpc_cmd bdev_get_bdevs 00:08:43.694 11:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.694 11:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:43.694 [] 00:08:43.695 11:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.695 11:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@136 -- # setup_nvme_conf 00:08:43.695 11:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:08:43.695 11:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:43.695 11:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:43.973 11:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:43.973 11:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.973 11:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:43.973 11:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.973 11:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:08:43.974 11:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:43.974 11:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:43.974 11:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:43.974 11:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # cat 00:08:44.232 11:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:08:44.232 11:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.232 11:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:44.232 11:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.232 11:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:08:44.232 11:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.232 11:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 
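This is the core of setup_gpt_conf: parted reported /dev/nvme1n1 as having an unrecognised disk label, so the script writes a fresh GPT with two equal partitions, then sgdisk re-stamps the partition type GUIDs with the SPDK-specific values scraped out of module/bdev/gpt/gpt.h (the IFS='()' read in the trace splits the header line on parentheses, and the 0x prefixes are stripped afterwards). Condensed, with every value copied from the run above:

  parted -s /dev/nvme1n1 mklabel gpt \
      mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
  # Partition 1 gets the current SPDK GPT type GUID, partition 2 the legacy one,
  # so both recognition paths of the gpt bdev module are exercised.
  sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b \
         -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme1n1
  sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c \
         -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme1n1

Once the label is on disk, setup.sh rebinds the controllers to uio_pci_generic and the target re-attaches all four of them through bdev_nvme_attach_controller, which is why the two partitions surface later as the Nvme0n1p1 and Nvme0n1p2 bdevs.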
00:08:44.232 11:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.232 11:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:44.232 11:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.232 11:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:44.232 11:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.232 11:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:08:44.232 11:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:08:44.232 11:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:08:44.232 11:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:44.232 11:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:44.232 11:50:42 blockdev_nvme_gpt -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:44.232 11:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:08:44.233 11:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "0164389f-a510-4bec-a547-857e28389725"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "0164389f-a510-4bec-a547-857e28389725",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "fface2da-573a-4473-9faf-294ebeca4ede"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fface2da-573a-4473-9faf-294ebeca4ede",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "57ca853c-715c-475d-af5f-0f698ba03115"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "57ca853c-715c-475d-af5f-0f698ba03115",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": 
"Nvme2n3",' ' "aliases": [' ' "a3032798-c36f-4eef-b6db-34879f632efa"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a3032798-c36f-4eef-b6db-34879f632efa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "dccbc660-a742-457c-b743-6c8e2beaa429"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "dccbc660-a742-457c-b743-6c8e2beaa429",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:44.233 11:50:42 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # jq -r .name 00:08:44.233 11:50:43 blockdev_nvme_gpt -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:08:44.233 11:50:43 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1p1 00:08:44.233 11:50:43 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:08:44.233 11:50:43 blockdev_nvme_gpt -- bdev/blockdev.sh@754 -- # killprocess 78683 00:08:44.233 11:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@946 -- # '[' -z 78683 ']' 00:08:44.233 11:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # kill -0 78683 00:08:44.233 11:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@951 -- # uname 00:08:44.233 11:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:44.233 11:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 78683 00:08:44.233 11:50:43 
blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:44.233 11:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:44.233 killing process with pid 78683 00:08:44.233 11:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # echo 'killing process with pid 78683' 00:08:44.233 11:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@965 -- # kill 78683 00:08:44.233 11:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # wait 78683 00:08:44.800 11:50:43 blockdev_nvme_gpt -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:44.800 11:50:43 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:44.800 11:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:08:44.800 11:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:44.800 11:50:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:44.800 ************************************ 00:08:44.800 START TEST bdev_hello_world 00:08:44.800 ************************************ 00:08:44.800 11:50:43 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:44.800 [2024-07-21 11:50:43.518365] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:08:44.800 [2024-07-21 11:50:43.518480] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79299 ] 00:08:45.058 [2024-07-21 11:50:43.679055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.058 [2024-07-21 11:50:43.726283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.316 [2024-07-21 11:50:44.097307] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:45.316 [2024-07-21 11:50:44.097361] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:08:45.316 [2024-07-21 11:50:44.097412] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:45.316 [2024-07-21 11:50:44.099532] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:45.316 [2024-07-21 11:50:44.099924] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:45.316 [2024-07-21 11:50:44.099955] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:45.316 [2024-07-21 11:50:44.100104] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
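bdev_hello_world runs the stock SPDK example against the first GPT partition: it opens the bdev, allocates an io channel, writes the string "Hello World!", reads it back, and stops the app, which is exactly the NOTICE sequence above. Reproduced from the run_test line (paths shortened to the repo root):

  build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1p1

--json points at the bdev configuration saved a few steps earlier and -b names the bdev to open; on success the example logs 'Read string from bdev : Hello World!' and exits 0.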
00:08:45.316 00:08:45.316 [2024-07-21 11:50:44.100135] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:45.575 00:08:45.575 real 0m0.895s 00:08:45.575 user 0m0.589s 00:08:45.575 sys 0m0.204s 00:08:45.575 11:50:44 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:45.575 11:50:44 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:45.575 ************************************ 00:08:45.575 END TEST bdev_hello_world 00:08:45.575 ************************************ 00:08:45.575 11:50:44 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:08:45.575 11:50:44 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:45.575 11:50:44 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:45.575 11:50:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:45.575 ************************************ 00:08:45.575 START TEST bdev_bounds 00:08:45.575 ************************************ 00:08:45.575 11:50:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:08:45.575 11:50:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=79330 00:08:45.575 11:50:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:45.575 11:50:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:45.575 11:50:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 79330' 00:08:45.575 Process bdevio pid: 79330 00:08:45.575 11:50:44 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 79330 00:08:45.575 11:50:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 79330 ']' 00:08:45.575 11:50:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.575 11:50:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:45.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.575 11:50:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.575 11:50:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:45.575 11:50:44 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:45.833 [2024-07-21 11:50:44.483567] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:08:45.833 [2024-07-21 11:50:44.483685] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79330 ] 00:08:45.833 [2024-07-21 11:50:44.649039] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:46.091 [2024-07-21 11:50:44.706045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.091 [2024-07-21 11:50:44.706075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.091 [2024-07-21 11:50:44.706124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:46.662 11:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:46.662 11:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:08:46.662 11:50:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:46.662 I/O targets: 00:08:46.662 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:08:46.662 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:08:46.662 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:08:46.662 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:46.662 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:46.662 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:46.662 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:46.662 00:08:46.662 00:08:46.662 CUnit - A unit testing framework for C - Version 2.1-3 00:08:46.662 http://cunit.sourceforge.net/ 00:08:46.662 00:08:46.662 00:08:46.662 Suite: bdevio tests on: Nvme3n1 00:08:46.662 Test: blockdev write read block ...passed 00:08:46.662 Test: blockdev write zeroes read block ...passed 00:08:46.662 Test: blockdev write zeroes read no split ...passed 00:08:46.662 Test: blockdev write zeroes read split ...passed 00:08:46.662 Test: blockdev write zeroes read split partial ...passed 00:08:46.662 Test: blockdev reset ...[2024-07-21 11:50:45.453896] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:08:46.662 [2024-07-21 11:50:45.456296] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
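bdev_bounds runs bdevio as a standalone SPDK app and then drives it over RPC: tests.py perform_tests kicks off one CUnit suite per bdev listed under 'I/O targets', and each suite begins by resetting the owning controller. The two halves of the harness, with flags copied from the run_test line:

  # the I/O exerciser, left waiting for an RPC trigger
  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json
  # fire the CUnit test list over the app's RPC socket
  test/bdev/bdevio/tests.py perform_tests

The COMPARE FAILURE (02/85) completions printed below during each 'comparev and writev' test appear to be mismatches the test provokes on purpose; the suites record that test as passed, so the error status is the expected outcome, not a failure of the run.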
00:08:46.662 passed 00:08:46.662 Test: blockdev write read 8 blocks ...passed 00:08:46.662 Test: blockdev write read size > 128k ...passed 00:08:46.662 Test: blockdev write read invalid size ...passed 00:08:46.662 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:46.662 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:46.662 Test: blockdev write read max offset ...passed 00:08:46.662 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:46.662 Test: blockdev writev readv 8 blocks ...passed 00:08:46.662 Test: blockdev writev readv 30 x 1block ...passed 00:08:46.662 Test: blockdev writev readv block ...passed 00:08:46.662 Test: blockdev writev readv size > 128k ...passed 00:08:46.662 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:46.662 Test: blockdev comparev and writev ...[2024-07-21 11:50:45.462030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9604000 len:0x1000 00:08:46.662 [2024-07-21 11:50:45.462082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:46.662 passed 00:08:46.662 Test: blockdev nvme passthru rw ...passed 00:08:46.662 Test: blockdev nvme passthru vendor specific ...[2024-07-21 11:50:45.462802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:46.662 [2024-07-21 11:50:45.462860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:46.662 passed 00:08:46.662 Test: blockdev nvme admin passthru ...passed 00:08:46.662 Test: blockdev copy ...passed 00:08:46.662 Suite: bdevio tests on: Nvme2n3 00:08:46.662 Test: blockdev write read block ...passed 00:08:46.662 Test: blockdev write zeroes read block ...passed 00:08:46.662 Test: blockdev write zeroes read no split ...passed 00:08:46.662 Test: blockdev write zeroes read split ...passed 00:08:46.662 Test: blockdev write zeroes read split partial ...passed 00:08:46.662 Test: blockdev reset ...[2024-07-21 11:50:45.484532] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:46.662 [2024-07-21 11:50:45.486169] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:46.662 passed 00:08:46.662 Test: blockdev write read 8 blocks ...passed 00:08:46.662 Test: blockdev write read size > 128k ...passed 00:08:46.662 Test: blockdev write read invalid size ...passed 00:08:46.662 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:46.662 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:46.662 Test: blockdev write read max offset ...passed 00:08:46.662 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:46.662 Test: blockdev writev readv 8 blocks ...passed 00:08:46.662 Test: blockdev writev readv 30 x 1block ...passed 00:08:46.662 Test: blockdev writev readv block ...passed 00:08:46.662 Test: blockdev writev readv size > 128k ...passed 00:08:46.662 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:46.662 Test: blockdev comparev and writev ...[2024-07-21 11:50:45.491869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9604000 len:0x1000 00:08:46.662 [2024-07-21 11:50:45.491918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:46.662 passed 00:08:46.662 Test: blockdev nvme passthru rw ...passed 00:08:46.662 Test: blockdev nvme passthru vendor specific ...[2024-07-21 11:50:45.492655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:46.662 [2024-07-21 11:50:45.492689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:46.662 passed 00:08:46.662 Test: blockdev nvme admin passthru ...passed 00:08:46.662 Test: blockdev copy ...passed 00:08:46.662 Suite: bdevio tests on: Nvme2n2 00:08:46.662 Test: blockdev write read block ...passed 00:08:46.662 Test: blockdev write zeroes read block ...passed 00:08:46.662 Test: blockdev write zeroes read no split ...passed 00:08:46.662 Test: blockdev write zeroes read split ...passed 00:08:46.662 Test: blockdev write zeroes read split partial ...passed 00:08:46.662 Test: blockdev reset ...[2024-07-21 11:50:45.513621] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:46.662 [2024-07-21 11:50:45.515687] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:46.662 passed 00:08:46.662 Test: blockdev write read 8 blocks ...passed 00:08:46.662 Test: blockdev write read size > 128k ...passed 00:08:46.662 Test: blockdev write read invalid size ...passed 00:08:46.663 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:46.663 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:46.663 Test: blockdev write read max offset ...passed 00:08:46.663 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:46.663 Test: blockdev writev readv 8 blocks ...passed 00:08:46.663 Test: blockdev writev readv 30 x 1block ...passed 00:08:46.663 Test: blockdev writev readv block ...passed 00:08:46.663 Test: blockdev writev readv size > 128k ...passed 00:08:46.663 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:46.663 Test: blockdev comparev and writev ...[2024-07-21 11:50:45.521250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cca22000 len:0x1000 00:08:46.663 [2024-07-21 11:50:45.521304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:46.663 passed 00:08:46.663 Test: blockdev nvme passthru rw ...passed 00:08:46.663 Test: blockdev nvme passthru vendor specific ...[2024-07-21 11:50:45.521892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:46.663 [2024-07-21 11:50:45.521932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:46.663 passed 00:08:46.663 Test: blockdev nvme admin passthru ...passed 00:08:46.663 Test: blockdev copy ...passed 00:08:46.663 Suite: bdevio tests on: Nvme2n1 00:08:46.663 Test: blockdev write read block ...passed 00:08:46.663 Test: blockdev write zeroes read block ...passed 00:08:46.922 Test: blockdev write zeroes read no split ...passed 00:08:46.922 Test: blockdev write zeroes read split ...passed 00:08:46.922 Test: blockdev write zeroes read split partial ...passed 00:08:46.922 Test: blockdev reset ...[2024-07-21 11:50:45.542712] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:08:46.922 [2024-07-21 11:50:45.544976] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:46.922 passed 00:08:46.922 Test: blockdev write read 8 blocks ...passed 00:08:46.922 Test: blockdev write read size > 128k ...passed 00:08:46.922 Test: blockdev write read invalid size ...passed 00:08:46.922 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:46.922 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:46.922 Test: blockdev write read max offset ...passed 00:08:46.922 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:46.922 Test: blockdev writev readv 8 blocks ...passed 00:08:46.922 Test: blockdev writev readv 30 x 1block ...passed 00:08:46.922 Test: blockdev writev readv block ...passed 00:08:46.922 Test: blockdev writev readv size > 128k ...passed 00:08:46.922 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:46.922 Test: blockdev comparev and writev ...[2024-07-21 11:50:45.550961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c960d000 len:0x1000 00:08:46.922 [2024-07-21 11:50:45.551012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:46.922 passed 00:08:46.922 Test: blockdev nvme passthru rw ...passed 00:08:46.922 Test: blockdev nvme passthru vendor specific ...[2024-07-21 11:50:45.551656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:46.922 [2024-07-21 11:50:45.551690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:46.922 passed 00:08:46.922 Test: blockdev nvme admin passthru ...passed 00:08:46.922 Test: blockdev copy ...passed 00:08:46.922 Suite: bdevio tests on: Nvme1n1 00:08:46.922 Test: blockdev write read block ...passed 00:08:46.922 Test: blockdev write zeroes read block ...passed 00:08:46.922 Test: blockdev write zeroes read no split ...passed 00:08:46.922 Test: blockdev write zeroes read split ...passed 00:08:46.922 Test: blockdev write zeroes read split partial ...passed 00:08:46.922 Test: blockdev reset ...[2024-07-21 11:50:45.572791] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:08:46.922 [2024-07-21 11:50:45.574943] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:08:46.922 passed 00:08:46.922 Test: blockdev write read 8 blocks ...passed 00:08:46.922 Test: blockdev write read size > 128k ...passed 00:08:46.922 Test: blockdev write read invalid size ...passed 00:08:46.922 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:46.922 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:46.922 Test: blockdev write read max offset ...passed 00:08:46.922 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:46.922 Test: blockdev writev readv 8 blocks ...passed 00:08:46.922 Test: blockdev writev readv 30 x 1block ...passed 00:08:46.922 Test: blockdev writev readv block ...passed 00:08:46.922 Test: blockdev writev readv size > 128k ...passed 00:08:46.922 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:46.922 Test: blockdev comparev and writev ...[2024-07-21 11:50:45.581091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9232000 len:0x1000 00:08:46.922 [2024-07-21 11:50:45.581144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:46.922 passed 00:08:46.922 Test: blockdev nvme passthru rw ...passed 00:08:46.922 Test: blockdev nvme passthru vendor specific ...[2024-07-21 11:50:45.581748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:46.922 [2024-07-21 11:50:45.581782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:46.922 passed 00:08:46.922 Test: blockdev nvme admin passthru ...passed 00:08:46.922 Test: blockdev copy ...passed 00:08:46.922 Suite: bdevio tests on: Nvme0n1p2 00:08:46.922 Test: blockdev write read block ...passed 00:08:46.922 Test: blockdev write zeroes read block ...passed 00:08:46.922 Test: blockdev write zeroes read no split ...passed 00:08:46.922 Test: blockdev write zeroes read split ...passed 00:08:46.922 Test: blockdev write zeroes read split partial ...passed 00:08:46.923 Test: blockdev reset ...[2024-07-21 11:50:45.606395] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:46.923 [2024-07-21 11:50:45.608254] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:46.923 passed 00:08:46.923 Test: blockdev write read 8 blocks ...passed 00:08:46.923 Test: blockdev write read size > 128k ...passed 00:08:46.923 Test: blockdev write read invalid size ...passed 00:08:46.923 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:46.923 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:46.923 Test: blockdev write read max offset ...passed 00:08:46.923 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:46.923 Test: blockdev writev readv 8 blocks ...passed 00:08:46.923 Test: blockdev writev readv 30 x 1block ...passed 00:08:46.923 Test: blockdev writev readv block ...passed 00:08:46.923 Test: blockdev writev readv size > 128k ...passed 00:08:46.923 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:46.923 Test: blockdev comparev and writev ...[2024-07-21 11:50:45.613995] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:08:46.923 separate metadata which is not supported yet. 
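The *ERROR* skip for Nvme0n1p2 (and for Nvme0n1p1 just below) ties back to the bdev dump earlier in this log: both GPT partitions report "md_size": 64 with "md_interleave": false, i.e. separate metadata, and bdevio's comparev_and_writev path does not support that layout yet, so it logs the skip and the test still counts as passed. A quick way to list which bdevs carry separate metadata, using the repo's rpc.py (a hedged sketch, not part of the test itself):

  scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | select((.md_size // 0) > 0 and .md_interleave == false) | .name'

Against this target that would print only Nvme0n1p1 and Nvme0n1p2.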
00:08:46.923 passed 00:08:46.923 Test: blockdev nvme passthru rw ...passed 00:08:46.923 Test: blockdev nvme passthru vendor specific ...passed 00:08:46.923 Test: blockdev nvme admin passthru ...passed 00:08:46.923 Test: blockdev copy ...passed 00:08:46.923 Suite: bdevio tests on: Nvme0n1p1 00:08:46.923 Test: blockdev write read block ...passed 00:08:46.923 Test: blockdev write zeroes read block ...passed 00:08:46.923 Test: blockdev write zeroes read no split ...passed 00:08:46.923 Test: blockdev write zeroes read split ...passed 00:08:46.923 Test: blockdev write zeroes read split partial ...passed 00:08:46.923 Test: blockdev reset ...[2024-07-21 11:50:45.634191] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:08:46.923 [2024-07-21 11:50:45.636015] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:08:46.923 passed 00:08:46.923 Test: blockdev write read 8 blocks ...passed 00:08:46.923 Test: blockdev write read size > 128k ...passed 00:08:46.923 Test: blockdev write read invalid size ...passed 00:08:46.923 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:46.923 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:46.923 Test: blockdev write read max offset ...passed 00:08:46.923 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:46.923 Test: blockdev writev readv 8 blocks ...passed 00:08:46.923 Test: blockdev writev readv 30 x 1block ...passed 00:08:46.923 Test: blockdev writev readv block ...passed 00:08:46.923 Test: blockdev writev readv size > 128k ...passed 00:08:46.923 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:46.923 Test: blockdev comparev and writev ...[2024-07-21 11:50:45.641939] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:08:46.923 separate metadata which is not supported yet. 
00:08:46.923 passed 00:08:46.923 Test: blockdev nvme passthru rw ...passed 00:08:46.923 Test: blockdev nvme passthru vendor specific ...passed 00:08:46.923 Test: blockdev nvme admin passthru ...passed 00:08:46.923 Test: blockdev copy ...passed 00:08:46.923 00:08:46.923 Run Summary: Type Total Ran Passed Failed Inactive 00:08:46.923 suites 7 7 n/a 0 0 00:08:46.923 tests 161 161 161 0 0 00:08:46.923 asserts 1006 1006 1006 0 n/a 00:08:46.923 00:08:46.923 Elapsed time = 0.504 seconds 00:08:46.923 0 00:08:46.923 11:50:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 79330 00:08:46.923 11:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 79330 ']' 00:08:46.923 11:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 79330 00:08:46.923 11:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:08:46.923 11:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:46.923 11:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79330 00:08:46.923 11:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:46.923 11:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:46.923 killing process with pid 79330 00:08:46.923 11:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79330' 00:08:46.923 11:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@965 -- # kill 79330 00:08:46.923 11:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # wait 79330 00:08:47.182 11:50:45 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:08:47.182 00:08:47.182 real 0m1.527s 00:08:47.182 user 0m3.699s 00:08:47.182 sys 0m0.379s 00:08:47.182 11:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:47.182 11:50:45 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:47.182 ************************************ 00:08:47.182 END TEST bdev_bounds 00:08:47.182 ************************************ 00:08:47.182 11:50:45 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:47.182 11:50:45 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:08:47.182 11:50:45 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:47.182 11:50:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:47.182 ************************************ 00:08:47.182 START TEST bdev_nbd 00:08:47.182 ************************************ 00:08:47.182 11:50:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:47.182 11:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:08:47.182 11:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:08:47.182 11:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.182 11:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local 
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:47.182 11:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:47.182 11:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:08:47.182 11:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=7 00:08:47.182 11:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:08:47.182 11:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:47.182 11:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:08:47.182 11:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=7 00:08:47.182 11:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:47.182 11:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:08:47.182 11:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:47.182 11:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:08:47.182 11:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=79378 00:08:47.182 11:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:47.182 11:50:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:47.182 11:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 79378 /var/tmp/spdk-nbd.sock 00:08:47.182 11:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 79378 ']' 00:08:47.182 11:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:47.182 11:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:47.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:47.182 11:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:47.182 11:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:47.182 11:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:47.440 [2024-07-21 11:50:46.083099] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
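bdev_nbd exports every bdev through the kernel NBD driver and proves a real block-layer read works: nbd_start_disk hands back a /dev/nbdX node, waitfornbd polls /proc/partitions until the device registers, and dd pulls one 4096-byte block with O_DIRECT. The per-device check, condensed from the loop below (socket path and device names from this run):

  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1   # returns /dev/nbd0
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  stat -c %s /tmp/nbdtest    # the harness only checks this is non-zero

The MB/s figures dd prints are meaningless for a single 4 KiB transfer; the test only cares that the read completes and the output file is non-empty.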
00:08:47.440 [2024-07-21 11:50:46.083263] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.440 [2024-07-21 11:50:46.250499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.440 [2024-07-21 11:50:46.303185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.374 11:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:48.374 11:50:46 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:08:48.374 11:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:48.374 11:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.374 11:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:48.374 11:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:48.374 11:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:48.374 11:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.374 11:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:48.374 11:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:48.374 11:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:48.374 11:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:48.374 11:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:48.374 11:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:48.374 11:50:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:08:48.374 11:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:48.374 11:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:48.374 11:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:48.374 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:08:48.374 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:48.374 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:48.374 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:48.374 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:08:48.374 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:48.374 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:48.374 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:48.374 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 
-- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:48.374 1+0 records in 00:08:48.374 1+0 records out 00:08:48.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000769405 s, 5.3 MB/s 00:08:48.374 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.374 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:48.374 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.374 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:48.374 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:48.374 11:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:48.374 11:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:48.374 11:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:08:48.633 11:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:48.633 11:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:48.633 11:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:48.633 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:08:48.633 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:48.633 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:48.633 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:48.633 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:08:48.633 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:48.633 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:48.633 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:48.633 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:48.633 1+0 records in 00:08:48.633 1+0 records out 00:08:48.633 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422189 s, 9.7 MB/s 00:08:48.633 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.633 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:48.633 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:48.633 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:48.633 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:48.633 11:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:48.633 11:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:48.633 11:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:08:48.891 11:50:47 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:48.891 11:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:48.891 11:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:48.891 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:08:48.891 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:48.891 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:48.891 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:48.891 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:08:48.891 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:48.891 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:48.891 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:48.892 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:48.892 1+0 records in 00:08:48.892 1+0 records out 00:08:48.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000721927 s, 5.7 MB/s 00:08:48.892 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.150 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:49.150 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.150 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:49.150 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:49.150 11:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:49.150 11:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:49.150 11:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:49.150 11:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:49.150 11:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:49.150 11:50:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:49.150 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:08:49.150 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:49.150 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:49.150 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:49.150 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:08:49.150 11:50:47 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:49.150 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:49.150 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:49.150 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:49.150 1+0 records in 00:08:49.150 1+0 records out 00:08:49.150 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045493 s, 9.0 MB/s 00:08:49.150 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.150 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:49.150 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.150 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:49.409 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:49.409 11:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:49.409 11:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:49.409 11:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:49.409 11:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:49.409 11:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:49.409 11:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:49.409 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:08:49.409 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:49.409 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:49.409 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:49.409 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:08:49.409 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:49.409 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:49.409 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:49.409 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:49.409 1+0 records in 00:08:49.409 1+0 records out 00:08:49.409 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000676676 s, 6.1 MB/s 00:08:49.409 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.409 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:49.409 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.409 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:49.409 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:49.409 11:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:49.409 11:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:49.409 11:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:08:49.668 11:50:48 blockdev_nvme_gpt.bdev_nbd 
-- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:49.668 11:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:49.668 11:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:49.668 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:08:49.668 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:49.668 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:49.668 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:49.668 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:08:49.668 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:49.668 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:49.668 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:49.668 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:49.668 1+0 records in 00:08:49.668 1+0 records out 00:08:49.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569137 s, 7.2 MB/s 00:08:49.927 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.927 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:49.927 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:49.927 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:49.927 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:49.927 11:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:49.927 11:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:49.927 11:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:50.187 11:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:50.187 11:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:50.187 11:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:50.187 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd6 00:08:50.187 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:50.187 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:50.187 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:50.187 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd6 /proc/partitions 00:08:50.187 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:50.187 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:50.187 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:50.187 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd6 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:50.187 1+0 records in 00:08:50.187 1+0 records out 00:08:50.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000849624 s, 4.8 MB/s 00:08:50.187 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.187 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:50.187 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:50.187 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:50.187 11:50:48 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:50.187 11:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:50.187 11:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:50.187 11:50:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:50.447 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:50.447 { 00:08:50.447 "nbd_device": "/dev/nbd0", 00:08:50.447 "bdev_name": "Nvme0n1p1" 00:08:50.447 }, 00:08:50.447 { 00:08:50.447 "nbd_device": "/dev/nbd1", 00:08:50.447 "bdev_name": "Nvme0n1p2" 00:08:50.447 }, 00:08:50.447 { 00:08:50.447 "nbd_device": "/dev/nbd2", 00:08:50.447 "bdev_name": "Nvme1n1" 00:08:50.447 }, 00:08:50.447 { 00:08:50.447 "nbd_device": "/dev/nbd3", 00:08:50.447 "bdev_name": "Nvme2n1" 00:08:50.447 }, 00:08:50.447 { 00:08:50.447 "nbd_device": "/dev/nbd4", 00:08:50.447 "bdev_name": "Nvme2n2" 00:08:50.447 }, 00:08:50.447 { 00:08:50.447 "nbd_device": "/dev/nbd5", 00:08:50.447 "bdev_name": "Nvme2n3" 00:08:50.447 }, 00:08:50.447 { 00:08:50.447 "nbd_device": "/dev/nbd6", 00:08:50.447 "bdev_name": "Nvme3n1" 00:08:50.447 } 00:08:50.447 ]' 00:08:50.447 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:50.447 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:50.447 { 00:08:50.447 "nbd_device": "/dev/nbd0", 00:08:50.447 "bdev_name": "Nvme0n1p1" 00:08:50.447 }, 00:08:50.447 { 00:08:50.447 "nbd_device": "/dev/nbd1", 00:08:50.447 "bdev_name": "Nvme0n1p2" 00:08:50.447 }, 00:08:50.447 { 00:08:50.447 "nbd_device": "/dev/nbd2", 00:08:50.447 "bdev_name": "Nvme1n1" 00:08:50.447 }, 00:08:50.447 { 00:08:50.447 "nbd_device": "/dev/nbd3", 00:08:50.447 "bdev_name": "Nvme2n1" 00:08:50.447 }, 00:08:50.447 { 00:08:50.447 "nbd_device": "/dev/nbd4", 00:08:50.447 "bdev_name": "Nvme2n2" 00:08:50.447 }, 00:08:50.447 { 00:08:50.447 "nbd_device": "/dev/nbd5", 00:08:50.447 "bdev_name": "Nvme2n3" 00:08:50.447 }, 00:08:50.447 { 00:08:50.447 "nbd_device": "/dev/nbd6", 00:08:50.447 "bdev_name": "Nvme3n1" 00:08:50.447 } 00:08:50.447 ]' 00:08:50.447 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:50.447 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:50.447 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:50.447 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' 
'/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:50.447 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:50.447 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:50.447 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.447 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:50.706 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:50.706 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:50.706 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:50.706 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.706 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.706 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:50.706 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.706 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.706 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.706 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:50.964 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:50.964 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:50.964 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:50.964 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:50.964 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:50.964 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:50.964 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:50.964 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:50.964 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:50.964 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:51.224 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:51.224 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:51.224 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:51.224 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.224 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.224 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:51.224 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:51.224 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.224 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.224 11:50:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:51.483 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:51.483 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:51.483 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:51.483 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.483 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.483 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:51.483 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:51.483 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.483 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.483 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:51.742 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:51.742 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:51.742 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:51.742 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:51.742 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:51.742 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:51.742 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:51.742 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:51.742 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:51.742 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:52.002 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:52.002 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:52.002 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:52.002 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.002 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.002 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:52.002 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:52.002 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.002 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:52.002 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:52.002 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:52.002 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:52.002 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:08:52.002 11:50:50 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:52.002 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:52.002 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:52.002 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:52.002 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:52.002 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:52.002 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.002 11:50:50 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:52.261 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:52.262 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:52.262 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:52.262 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:52.262 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:52.262 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:08:52.530 /dev/nbd0 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:52.530 1+0 records in 00:08:52.530 1+0 records out 00:08:52.530 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00067588 s, 6.1 MB/s 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:52.530 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:08:52.802 /dev/nbd1 00:08:52.802 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:52.802 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:52.802 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:08:52.802 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:52.802 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:52.802 11:50:51 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:52.802 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:08:52.802 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:52.802 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:52.802 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:52.802 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:52.802 1+0 records in 00:08:52.802 1+0 records out 00:08:52.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415999 s, 9.8 MB/s 00:08:52.802 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.803 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:52.803 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:52.803 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:52.803 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:52.803 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:52.803 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:52.803 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:08:53.062 /dev/nbd10 00:08:53.062 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:53.062 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:53.062 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:08:53.062 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:53.062 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:53.063 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:53.063 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:08:53.063 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:53.063 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:53.063 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:53.063 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:53.063 1+0 records in 00:08:53.063 1+0 records out 00:08:53.063 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000752281 s, 5.4 MB/s 00:08:53.063 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:53.063 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:53.063 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:53.063 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 
'!=' 0 ']' 00:08:53.063 11:50:51 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:53.063 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:53.063 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:53.063 11:50:51 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:53.323 /dev/nbd11 00:08:53.323 11:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:53.323 11:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:53.323 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:08:53.323 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:53.323 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:53.323 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:53.323 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:08:53.323 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:53.323 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:53.323 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:53.323 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:53.323 1+0 records in 00:08:53.323 1+0 records out 00:08:53.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000599768 s, 6.8 MB/s 00:08:53.323 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:53.323 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:53.323 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:53.323 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:53.323 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:53.323 11:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:53.323 11:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:53.323 11:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:53.583 /dev/nbd12 00:08:53.583 11:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:53.583 11:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:53.583 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:08:53.583 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:53.583 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:53.583 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:53.583 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:08:53.583 11:50:52 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:53.583 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:53.583 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:53.583 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:53.583 1+0 records in 00:08:53.583 1+0 records out 00:08:53.583 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570837 s, 7.2 MB/s 00:08:53.583 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:53.583 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:53.583 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:53.583 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:53.583 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:53.583 11:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:53.583 11:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:53.583 11:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:53.843 /dev/nbd13 00:08:53.843 11:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:53.843 11:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:53.843 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:08:53.843 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:53.843 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:53.843 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:53.843 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:08:53.843 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:53.843 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:53.843 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:53.843 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:53.843 1+0 records in 00:08:53.843 1+0 records out 00:08:53.843 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000734499 s, 5.6 MB/s 00:08:53.843 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:53.843 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:53.843 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:53.843 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:53.843 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:53.843 11:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ 
)) 00:08:53.843 11:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:53.843 11:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:54.103 /dev/nbd14 00:08:54.103 11:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:54.103 11:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:54.103 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd14 00:08:54.103 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:08:54.103 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:08:54.103 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:08:54.103 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd14 /proc/partitions 00:08:54.103 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:08:54.103 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:08:54.103 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:08:54.103 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:54.103 1+0 records in 00:08:54.103 1+0 records out 00:08:54.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000807116 s, 5.1 MB/s 00:08:54.103 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.103 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:08:54.103 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:54.103 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:08:54.103 11:50:52 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:08:54.103 11:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:54.103 11:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:54.103 11:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:54.103 11:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:54.103 11:50:52 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:54.362 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:54.362 { 00:08:54.362 "nbd_device": "/dev/nbd0", 00:08:54.362 "bdev_name": "Nvme0n1p1" 00:08:54.362 }, 00:08:54.362 { 00:08:54.362 "nbd_device": "/dev/nbd1", 00:08:54.362 "bdev_name": "Nvme0n1p2" 00:08:54.362 }, 00:08:54.362 { 00:08:54.362 "nbd_device": "/dev/nbd10", 00:08:54.362 "bdev_name": "Nvme1n1" 00:08:54.362 }, 00:08:54.362 { 00:08:54.362 "nbd_device": "/dev/nbd11", 00:08:54.362 "bdev_name": "Nvme2n1" 00:08:54.362 }, 00:08:54.362 { 00:08:54.362 "nbd_device": "/dev/nbd12", 00:08:54.362 "bdev_name": "Nvme2n2" 00:08:54.362 }, 00:08:54.362 { 00:08:54.362 "nbd_device": "/dev/nbd13", 00:08:54.362 "bdev_name": "Nvme2n3" 00:08:54.362 }, 00:08:54.362 { 
00:08:54.362 "nbd_device": "/dev/nbd14", 00:08:54.362 "bdev_name": "Nvme3n1" 00:08:54.362 } 00:08:54.362 ]' 00:08:54.362 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:54.362 { 00:08:54.362 "nbd_device": "/dev/nbd0", 00:08:54.362 "bdev_name": "Nvme0n1p1" 00:08:54.362 }, 00:08:54.362 { 00:08:54.362 "nbd_device": "/dev/nbd1", 00:08:54.362 "bdev_name": "Nvme0n1p2" 00:08:54.362 }, 00:08:54.362 { 00:08:54.362 "nbd_device": "/dev/nbd10", 00:08:54.362 "bdev_name": "Nvme1n1" 00:08:54.362 }, 00:08:54.362 { 00:08:54.362 "nbd_device": "/dev/nbd11", 00:08:54.362 "bdev_name": "Nvme2n1" 00:08:54.362 }, 00:08:54.362 { 00:08:54.362 "nbd_device": "/dev/nbd12", 00:08:54.362 "bdev_name": "Nvme2n2" 00:08:54.362 }, 00:08:54.362 { 00:08:54.362 "nbd_device": "/dev/nbd13", 00:08:54.362 "bdev_name": "Nvme2n3" 00:08:54.362 }, 00:08:54.362 { 00:08:54.362 "nbd_device": "/dev/nbd14", 00:08:54.362 "bdev_name": "Nvme3n1" 00:08:54.362 } 00:08:54.362 ]' 00:08:54.362 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:54.362 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:54.362 /dev/nbd1 00:08:54.362 /dev/nbd10 00:08:54.362 /dev/nbd11 00:08:54.362 /dev/nbd12 00:08:54.362 /dev/nbd13 00:08:54.362 /dev/nbd14' 00:08:54.362 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:54.362 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:54.362 /dev/nbd1 00:08:54.362 /dev/nbd10 00:08:54.362 /dev/nbd11 00:08:54.362 /dev/nbd12 00:08:54.362 /dev/nbd13 00:08:54.362 /dev/nbd14' 00:08:54.362 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:54.362 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:54.362 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:54.362 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:54.362 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:54.362 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:54.362 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:54.362 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:54.362 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:54.362 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:54.362 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:54.362 256+0 records in 00:08:54.362 256+0 records out 00:08:54.362 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137738 s, 76.1 MB/s 00:08:54.362 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:54.362 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:54.621 256+0 records in 00:08:54.621 256+0 records out 00:08:54.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.102959 s, 10.2 MB/s 00:08:54.621 
11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:54.621 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:54.621 256+0 records in 00:08:54.621 256+0 records out 00:08:54.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.105752 s, 9.9 MB/s 00:08:54.621 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:54.621 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:54.880 256+0 records in 00:08:54.880 256+0 records out 00:08:54.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.107217 s, 9.8 MB/s 00:08:54.880 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:54.880 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:54.880 256+0 records in 00:08:54.880 256+0 records out 00:08:54.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.110668 s, 9.5 MB/s 00:08:54.880 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:54.880 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:55.139 256+0 records in 00:08:55.139 256+0 records out 00:08:55.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.110533 s, 9.5 MB/s 00:08:55.139 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:55.139 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:55.139 256+0 records in 00:08:55.139 256+0 records out 00:08:55.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.10996 s, 9.5 MB/s 00:08:55.139 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:55.139 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:55.139 256+0 records in 00:08:55.139 256+0 records out 00:08:55.139 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.100103 s, 10.5 MB/s 00:08:55.139 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:55.139 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:55.139 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:55.139 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:55.139 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:55.139 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:55.139 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:55.139 11:50:53 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:55.139 11:50:53 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:55.139 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:55.139 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:55.398 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:55.398 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:55.398 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:55.398 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:55.398 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:55.398 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:55.398 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:55.398 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:55.398 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:55.398 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:55.398 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:55.398 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:55.398 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:55.398 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:55.398 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:55.398 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:55.398 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:55.398 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:55.656 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:55.657 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:55.657 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:55.657 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:55.657 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:55.657 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:55.657 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:55.657 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:55.657 
11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:55.657 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:55.657 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:55.916 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:55.916 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:55.916 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:55.916 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:55.916 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:55.916 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:55.916 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:55.916 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:55.916 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:55.916 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:55.916 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:55.916 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:55.916 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:55.916 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:55.916 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:55.916 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:55.916 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:55.916 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:55.916 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:56.173 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:56.173 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:56.173 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:56.173 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:56.173 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:56.173 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:56.173 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:56.173 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:56.173 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:56.173 11:50:54 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:56.435 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:56.435 11:50:55 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:56.435 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:56.435 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:56.435 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:56.435 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:56.435 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:56.435 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:56.435 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:56.435 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:56.693 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:56.693 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:56.693 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:56.693 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:56.693 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:56.693 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:56.693 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:56.693 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:56.693 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:56.693 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:56.951 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:56.951 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:56.951 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:56.951 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:56.951 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:56.951 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:56.951 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:56.951 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:56.951 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:56.951 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:56.951 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:57.210 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:57.210 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:57.210 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:57.210 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:57.210 
11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:57.210 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:57.210 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:57.210 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:57.210 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:57.210 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:57.210 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:57.210 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:57.210 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:57.210 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:57.210 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:57.210 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:57.210 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:57.210 11:50:55 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:57.468 malloc_lvol_verify 00:08:57.468 11:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:57.726 0ff1c148-146a-4000-b622-2d5313084eb3 00:08:57.726 11:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:57.984 a389c9a3-df90-43ec-bfb3-4a6ff47e82a3 00:08:57.984 11:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:58.243 /dev/nbd0 00:08:58.243 11:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:58.243 mke2fs 1.46.5 (30-Dec-2021) 00:08:58.243 Discarding device blocks: 0/4096 done 00:08:58.243 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:58.243 00:08:58.243 Allocating group tables: 0/1 done 00:08:58.243 Writing inode tables: 0/1 done 00:08:58.243 Creating journal (1024 blocks): done 00:08:58.243 Writing superblocks and filesystem accounting information: 0/1 done 00:08:58.243 00:08:58.243 11:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:58.243 11:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:58.243 11:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:58.243 11:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:58.243 11:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:58.243 11:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:58.243 11:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:58.243 
11:50:56 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:58.503 11:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:58.503 11:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:58.503 11:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:58.503 11:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:58.503 11:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:58.503 11:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:58.503 11:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:58.503 11:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:58.503 11:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:58.503 11:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:08:58.503 11:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 79378 00:08:58.503 11:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 79378 ']' 00:08:58.503 11:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 79378 00:08:58.503 11:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:08:58.503 11:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:58.503 11:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 79378 00:08:58.503 11:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:08:58.503 11:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:08:58.503 killing process with pid 79378 00:08:58.503 11:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 79378' 00:08:58.503 11:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@965 -- # kill 79378 00:08:58.503 11:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # wait 79378 00:08:58.763 11:50:57 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:08:58.763 00:08:58.763 real 0m11.492s 00:08:58.763 user 0m16.349s 00:08:58.763 sys 0m4.463s 00:08:58.763 11:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:58.763 11:50:57 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:58.763 ************************************ 00:08:58.763 END TEST bdev_nbd 00:08:58.763 ************************************ 00:08:58.763 11:50:57 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:08:58.763 11:50:57 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = nvme ']' 00:08:58.763 11:50:57 blockdev_nvme_gpt -- bdev/blockdev.sh@764 -- # '[' gpt = gpt ']' 00:08:58.763 skipping fio tests on NVMe due to multi-ns failures. 00:08:58.763 11:50:57 blockdev_nvme_gpt -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
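The nbd_stop_disks/waitfornbd_exit trace above reduces to one pattern per device: ask the SPDK app over its RPC socket to detach the export, then poll /proc/partitions up to 20 times until the kernel entry disappears. A minimal bash sketch of that loop follows; the RPC script, socket path, and 20-iteration budget are taken from the trace, while the helper name and the per-probe sleep are our assumptions rather than the literal nbd_common.sh source:

  # Detach one nbd device via SPDK's RPC socket, then wait for the kernel
  # to drop it from /proc/partitions (pattern from the trace above).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  stop_nbd_and_wait() {
      local dev=$1 name
      name=$(basename "$dev")              # e.g. /dev/nbd13 -> nbd13
      "$rpc" -s "$sock" nbd_stop_disk "$dev"
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$name" /proc/partitions || return 0
          sleep 0.1                        # assumed poll interval
      done
      return 1                             # still present after 20 probes
  }

The @37/@38/@41/@45 markers repeat once per entry of nbd_list, which is exactly this loop running for each device in turn.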
00:08:58.763 11:50:57 blockdev_nvme_gpt -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT
00:08:58.763 11:50:57 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:08:58.763 11:50:57 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']'
00:08:58.763 11:50:57 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable
00:08:58.763 11:50:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:08:58.763 ************************************
00:08:58.763 START TEST bdev_verify
00:08:58.763 ************************************
00:08:58.763 11:50:57 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:08:59.022 [2024-07-21 11:50:57.636727] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... [2024-07-21 11:50:57.636866] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79797 ]
00:08:59.022 [2024-07-21 11:50:57.806050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:59.022 [2024-07-21 11:50:57.859370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:59.022 [2024-07-21 11:50:57.859471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:59.590 Running I/O for 5 seconds...
00:09:04.874
00:09:04.874 Latency(us)
00:09:04.874 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:04.874 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:04.874 Verification LBA range: start 0x0 length 0x5e800
00:09:04.874 Nvme0n1p1 : 5.05 1292.98 5.05 0.00 0.00 98662.82 19803.89 91120.80
00:09:04.874 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:04.874 Verification LBA range: start 0x5e800 length 0x5e800
00:09:04.874 Nvme0n1p1 : 5.07 1288.63 5.03 0.00 0.00 98998.22 20948.63 90205.01
00:09:04.874 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:04.874 Verification LBA range: start 0x0 length 0x5e7ff
00:09:04.874 Nvme0n1p2 : 5.05 1292.50 5.05 0.00 0.00 98471.82 22436.78 90205.01
00:09:04.874 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:04.874 Verification LBA range: start 0x5e7ff length 0x5e7ff
00:09:04.874 Nvme0n1p2 : 5.07 1288.19 5.03 0.00 0.00 98806.86 24268.35 86083.97
00:09:04.874 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:04.874 Verification LBA range: start 0x0 length 0xa0000
00:09:04.874 Nvme1n1 : 5.05 1292.09 5.05 0.00 0.00 98315.55 24268.35 88373.44
00:09:04.874 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:04.874 Verification LBA range: start 0xa0000 length 0xa0000
00:09:04.874 Nvme1n1 : 5.07 1287.79 5.03 0.00 0.00 98584.92 26672.29 87457.65
00:09:04.874 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:04.874 Verification LBA range: start 0x0 length 0x80000
00:09:04.874 Nvme2n1 : 5.07 1299.45 5.08 0.00 0.00 97558.79 4292.75 88373.44
00:09:04.874 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:04.874 Verification LBA range: start 0x80000 length 0x80000
00:09:04.874 Nvme2n1 : 5.07 1287.39 5.03 0.00 0.00 98406.04 26786.77 87915.54
00:09:04.874 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:04.874 Verification LBA range: start 0x0 length 0x80000
00:09:04.874 Nvme2n2 : 5.09 1308.49 5.11 0.00 0.00 96851.43 10874.97 86999.76
00:09:04.874 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:04.874 Verification LBA range: start 0x80000 length 0x80000
00:09:04.874 Nvme2n2 : 5.09 1295.90 5.06 0.00 0.00 97592.73 4349.99 88373.44
00:09:04.874 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:04.874 Verification LBA range: start 0x0 length 0x80000
00:09:04.874 Nvme2n3 : 5.09 1307.72 5.11 0.00 0.00 96722.29 10760.50 88831.33
00:09:04.874 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:04.874 Verification LBA range: start 0x80000 length 0x80000
00:09:04.874 Nvme2n3 : 5.10 1305.78 5.10 0.00 0.00 96789.01 6410.51 88373.44
00:09:04.874 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:09:04.874 Verification LBA range: start 0x0 length 0x20000
00:09:04.874 Nvme3n1 : 5.09 1307.34 5.11 0.00 0.00 96579.52 10703.26 92036.58
00:09:04.874 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:09:04.874 Verification LBA range: start 0x20000 length 0x20000
00:09:04.874 Nvme3n1 : 5.10 1305.42 5.10 0.00 0.00 96660.75 6553.60 88831.33
00:09:04.874 ===================================================================================================================
00:09:04.874 Total : 18159.68 70.94 0.00 0.00 97778.08 4292.75 92036.58
00:09:05.132
00:09:05.132 real 0m6.402s
00:09:05.132 user 0m11.905s
00:09:05.132 sys 0m0.286s
00:09:05.132 11:51:03 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable
00:09:05.132 11:51:03 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:09:05.132 ************************************
00:09:05.132 END TEST bdev_verify
00:09:05.132 ************************************
00:09:05.390 11:51:04 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
11:51:04 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']'
11:51:04 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable
11:51:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:05.390 ************************************
00:09:05.390 START TEST bdev_verify_big_io
00:09:05.390 ************************************
00:09:05.390 11:51:04 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:09:05.390 [2024-07-21 11:51:04.103064] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
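Both verify passes above drive the same bdevperf binary with flags copied verbatim into the trace; only the I/O size changes between them. Reconstructed as a single command (run_test appends the trailing empty argument seen as '' in the trace):

  # 4 KiB verify pass as traced above; the big-I/O pass that follows merely
  # swaps -o 4096 for -o 65536.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3

Each bdev shows up twice in the result table because -m 0x3 schedules a job on each of the two reactors (Core Mask 0x1 and 0x2), and the last three columns are average/min/max completion latency in microseconds per the Latency(us) header.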
00:09:05.390 [2024-07-21 11:51:04.103191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79886 ]
00:09:05.648 [2024-07-21 11:51:04.271667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:05.648 [2024-07-21 11:51:04.324496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:05.648 [2024-07-21 11:51:04.324588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:06.213 Running I/O for 5 seconds...
00:09:12.787
00:09:12.787 Latency(us)
00:09:12.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:12.787 Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:12.787 Verification LBA range: start 0x0 length 0x5e80
00:09:12.787 Nvme0n1p1 : 5.78 108.00 6.75 0.00 0.00 1120401.98 52886.69 1201512.41
00:09:12.787 Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:12.787 Verification LBA range: start 0x5e80 length 0x5e80
00:09:12.787 Nvme0n1p1 : 5.84 115.00 7.19 0.00 0.00 1075942.90 24039.41 1428627.56
00:09:12.787 Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:12.787 Verification LBA range: start 0x0 length 0x5e7f
00:09:12.787 Nvme0n1p2 : 5.78 94.12 5.88 0.00 0.00 1263035.95 119052.30 1948794.52
00:09:12.787 Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:12.787 Verification LBA range: start 0x5e7f length 0x5e7f
00:09:12.787 Nvme0n1p2 : 5.77 113.96 7.12 0.00 0.00 1041363.83 97531.30 1150228.35
00:09:12.787 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:12.787 Verification LBA range: start 0x0 length 0xa000
00:09:12.787 Nvme1n1 : 5.78 113.97 7.12 0.00 0.00 1029647.07 101652.35 1003702.44
00:09:12.787 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:12.787 Verification LBA range: start 0xa000 length 0xa000
00:09:12.787 Nvme1n1 : 5.84 117.00 7.31 0.00 0.00 1006814.17 61357.72 1164880.94
00:09:12.787 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:12.787 Verification LBA range: start 0x0 length 0x8000
00:09:12.787 Nvme2n1 : 5.78 114.11 7.13 0.00 0.00 1006175.76 130957.53 1018355.03
00:09:12.787 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:12.787 Verification LBA range: start 0x8000 length 0x8000
00:09:12.787 Nvme2n1 : 5.88 117.45 7.34 0.00 0.00 981095.11 17399.95 1765637.14
00:09:12.788 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:12.788 Verification LBA range: start 0x0 length 0x8000
00:09:12.788 Nvme2n2 : 5.85 126.13 7.88 0.00 0.00 910325.06 13107.20 1076965.39
00:09:12.788 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:12.788 Verification LBA range: start 0x8000 length 0x8000
00:09:12.788 Nvme2n2 : 5.88 117.26 7.33 0.00 0.00 956998.06 17857.84 1787616.03
00:09:12.788 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:12.788 Verification LBA range: start 0x0 length 0x8000
00:09:12.788 Nvme2n3 : 5.85 127.63 7.98 0.00 0.00 873706.71 13736.80 1091617.98
00:09:12.788 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:12.788 Verification LBA range: start 0x8000 length 0x8000
00:09:12.788 Nvme2n3 : 5.88 122.14 7.63 0.00 0.00 898671.70 16255.22 1809594.91
00:09:12.788 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:09:12.788 Verification LBA range: start 0x0 length 0x2000
00:09:12.788 Nvme3n1 : 5.86 135.19 8.45 0.00 0.00 804288.74 2303.78 1164880.94
00:09:12.788 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:09:12.788 Verification LBA range: start 0x2000 length 0x2000
00:09:12.788 Nvme3n1 : 5.93 148.34 9.27 0.00 0.00 724367.87 937.25 1831573.80
00:09:12.788 ===================================================================================================================
00:09:12.788 Total : 1670.30 104.39 0.00 0.00 964256.91 937.25 1948794.52
00:09:12.788
00:09:12.788 real 0m7.372s
00:09:12.788 user 0m13.785s
00:09:12.788 sys 0m0.320s
00:09:12.788 11:51:11 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable
00:09:12.788 11:51:11 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:09:12.788 ************************************
00:09:12.788 END TEST bdev_verify_big_io
00:09:12.788 ************************************
00:09:12.788 11:51:11 blockdev_nvme_gpt -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
11:51:11 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
11:51:11 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable
11:51:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:12.788 ************************************
00:09:12.788 START TEST bdev_write_zeroes
00:09:12.788 ************************************
00:09:12.788 11:51:11 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:12.788 [2024-07-21 11:51:11.541626] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... [2024-07-21 11:51:11.541755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79979 ]
00:09:13.046 [2024-07-21 11:51:11.707584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:13.046 [2024-07-21 11:51:11.758552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:13.304 Running I/O for 1 seconds...
00:09:14.671
00:09:14.671 Latency(us)
00:09:14.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:14.671 Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:14.671 Nvme0n1p1 : 1.02 8294.75 32.40 0.00 0.00 15378.05 11103.92 32968.33
00:09:14.671 Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:14.671 Nvme0n1p2 : 1.02 8284.42 32.36 0.00 0.00 15371.00 11390.10 33197.28
00:09:14.671 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:14.671 Nvme1n1 : 1.02 8275.14 32.32 0.00 0.00 15325.02 11676.28 30678.86
00:09:14.671 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:14.671 Nvme2n1 : 1.02 8308.18 32.45 0.00 0.00 15194.06 8928.92 24840.72
00:09:14.671 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:14.671 Nvme2n2 : 1.03 8298.64 32.42 0.00 0.00 15183.26 9444.05 24382.83
00:09:14.671 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:14.671 Nvme2n3 : 1.03 8341.95 32.59 0.00 0.00 15052.01 4693.41 21978.89
00:09:14.671 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:09:14.671 Nvme3n1 : 1.03 8332.66 32.55 0.00 0.00 15035.09 4893.74 21978.89
00:09:14.671 ===================================================================================================================
00:09:14.671 Total : 58135.73 227.09 0.00 0.00 15218.96 4693.41 33197.28
00:09:14.671
00:09:14.671 real 0m2.008s
00:09:14.671 user 0m1.663s
00:09:14.671 sys 0m0.234s
00:09:14.671 11:51:13 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable
00:09:14.671 11:51:13 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:09:14.671 ************************************
00:09:14.671 END TEST bdev_write_zeroes
00:09:14.671 ************************************
00:09:14.671 11:51:13 blockdev_nvme_gpt -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
11:51:13 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
11:51:13 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable
11:51:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:09:14.671 ************************************
00:09:14.671 START TEST bdev_json_nonenclosed
00:09:14.671 ************************************
00:09:14.672 11:51:13 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:09:14.929 [2024-07-21 11:51:13.607927] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
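A quick arithmetic check on the one-second write_zeroes table above: the MiB/s column is IOPS multiplied by the 4 KiB I/O size. For the Total row:

  # 58135.73 IOPS x 4096 bytes per I/O, expressed in MiB/s;
  # prints 227.09, matching the Total row above.
  awk 'BEGIN { printf "%.2f\n", 58135.73 * 4096 / (1024 * 1024) }'

The per-device rows obey the same relation, so the table stays internally consistent even though the measured window is only about a second (real 0m2.008s for the whole test, including setup).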
00:09:14.929 [2024-07-21 11:51:13.608067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80021 ] 00:09:14.929 [2024-07-21 11:51:13.773413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.187 [2024-07-21 11:51:13.823416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.187 [2024-07-21 11:51:13.823517] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:15.187 [2024-07-21 11:51:13.823542] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:15.187 [2024-07-21 11:51:13.823554] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:15.187 00:09:15.187 real 0m0.426s 00:09:15.187 user 0m0.189s 00:09:15.187 sys 0m0.133s 00:09:15.187 11:51:13 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:15.187 11:51:13 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:09:15.187 ************************************ 00:09:15.187 END TEST bdev_json_nonenclosed 00:09:15.187 ************************************ 00:09:15.187 11:51:13 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:15.187 11:51:13 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']' 00:09:15.187 11:51:13 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:15.187 11:51:13 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:15.187 ************************************ 00:09:15.187 START TEST bdev_json_nonarray 00:09:15.187 ************************************ 00:09:15.187 11:51:14 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:15.446 [2024-07-21 11:51:14.091590] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:09:15.446 [2024-07-21 11:51:14.091725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80052 ] 00:09:15.446 [2024-07-21 11:51:14.256505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.446 [2024-07-21 11:51:14.305777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.446 [2024-07-21 11:51:14.305894] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:09:15.446 [2024-07-21 11:51:14.305921] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:09:15.446 [2024-07-21 11:51:14.305933] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:15.704 00:09:15.704 real 0m0.412s 00:09:15.704 user 0m0.193s 00:09:15.704 sys 0m0.116s 00:09:15.704 11:51:14 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:15.704 11:51:14 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:09:15.704 ************************************ 00:09:15.704 END TEST bdev_json_nonarray 00:09:15.704 ************************************ 00:09:15.704 11:51:14 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # [[ gpt == bdev ]] 00:09:15.704 11:51:14 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # [[ gpt == gpt ]] 00:09:15.704 11:51:14 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:15.704 11:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:15.704 11:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:15.704 11:51:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:15.704 ************************************ 00:09:15.704 START TEST bdev_gpt_uuid 00:09:15.704 ************************************ 00:09:15.704 11:51:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1121 -- # bdev_gpt_uuid 00:09:15.704 11:51:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@614 -- # local bdev 00:09:15.704 11:51:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@616 -- # start_spdk_tgt 00:09:15.705 11:51:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:15.705 11:51:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=80072 00:09:15.705 11:51:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:15.705 11:51:14 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 80072 00:09:15.705 11:51:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@827 -- # '[' -z 80072 ']' 00:09:15.705 11:51:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.705 11:51:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:15.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.705 11:51:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.705 11:51:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:15.705 11:51:14 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:15.963 [2024-07-21 11:51:14.583302] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
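The two JSON negative tests that finish above feed bdevperf deliberately malformed configs: nonenclosed.json is not wrapped in a top-level object, and nonarray.json makes 'subsystems' something other than an array. The json_config.c @608 and @614 *ERROR* lines in the trace are the expected rejections, followed by the clean spdk_app_stop'd-on-non-zero shutdown rather than a crash. For contrast, a minimal sketch of the well-formed shape such a config must have (contents elided; an illustration, not a file these tests use):

  # Smallest well-formed skeleton for a --json config: one top-level object
  # whose "subsystems" key is an array of subsystem objects.
  cat > /tmp/minimal.json <<'EOF'
  {
    "subsystems": [
      { "subsystem": "bdev", "config": [] }
    ]
  }
  EOF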
00:09:15.963 [2024-07-21 11:51:14.583538] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80072 ] 00:09:15.963 [2024-07-21 11:51:14.763692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.963 [2024-07-21 11:51:14.813613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.900 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:16.900 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # return 0 00:09:16.900 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:16.900 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.900 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:16.900 Some configs were skipped because the RPC state that can call them passed over. 00:09:16.900 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.900 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_wait_for_examine 00:09:16.900 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.900 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:16.900 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.900 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:16.900 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:16.900 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:16.900 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:16.900 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # bdev='[ 00:09:16.900 { 00:09:16.900 "name": "Nvme0n1p1", 00:09:16.900 "aliases": [ 00:09:16.900 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:16.900 ], 00:09:16.900 "product_name": "GPT Disk", 00:09:16.900 "block_size": 4096, 00:09:16.900 "num_blocks": 774144, 00:09:16.900 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:16.900 "md_size": 64, 00:09:16.900 "md_interleave": false, 00:09:16.900 "dif_type": 0, 00:09:16.900 "assigned_rate_limits": { 00:09:16.900 "rw_ios_per_sec": 0, 00:09:16.900 "rw_mbytes_per_sec": 0, 00:09:16.900 "r_mbytes_per_sec": 0, 00:09:16.900 "w_mbytes_per_sec": 0 00:09:16.900 }, 00:09:16.900 "claimed": false, 00:09:16.900 "zoned": false, 00:09:16.900 "supported_io_types": { 00:09:16.900 "read": true, 00:09:16.900 "write": true, 00:09:16.900 "unmap": true, 00:09:16.900 "write_zeroes": true, 00:09:16.900 "flush": true, 00:09:16.900 "reset": true, 00:09:16.900 "compare": true, 00:09:16.900 "compare_and_write": false, 00:09:16.900 "abort": true, 00:09:16.900 "nvme_admin": false, 00:09:16.900 "nvme_io": false 00:09:16.900 }, 00:09:16.900 "driver_specific": { 00:09:16.900 "gpt": { 00:09:16.900 "base_bdev": "Nvme0n1", 00:09:16.900 "offset_blocks": 256, 00:09:16.900 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:16.900 "unique_partition_guid": 
"6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:16.900 "partition_name": "SPDK_TEST_first" 00:09:16.900 } 00:09:16.900 } 00:09:16.900 } 00:09:16.900 ]' 00:09:16.900 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r length 00:09:17.159 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 1 == \1 ]] 00:09:17.159 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].aliases[0]' 00:09:17.159 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:17.159 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:17.159 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@624 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:17.159 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:17.159 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.159 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:17.159 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.159 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # bdev='[ 00:09:17.159 { 00:09:17.159 "name": "Nvme0n1p2", 00:09:17.159 "aliases": [ 00:09:17.159 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:17.159 ], 00:09:17.159 "product_name": "GPT Disk", 00:09:17.159 "block_size": 4096, 00:09:17.159 "num_blocks": 774143, 00:09:17.159 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:17.159 "md_size": 64, 00:09:17.159 "md_interleave": false, 00:09:17.159 "dif_type": 0, 00:09:17.159 "assigned_rate_limits": { 00:09:17.159 "rw_ios_per_sec": 0, 00:09:17.159 "rw_mbytes_per_sec": 0, 00:09:17.159 "r_mbytes_per_sec": 0, 00:09:17.159 "w_mbytes_per_sec": 0 00:09:17.159 }, 00:09:17.159 "claimed": false, 00:09:17.159 "zoned": false, 00:09:17.159 "supported_io_types": { 00:09:17.159 "read": true, 00:09:17.159 "write": true, 00:09:17.159 "unmap": true, 00:09:17.159 "write_zeroes": true, 00:09:17.159 "flush": true, 00:09:17.159 "reset": true, 00:09:17.159 "compare": true, 00:09:17.159 "compare_and_write": false, 00:09:17.159 "abort": true, 00:09:17.159 "nvme_admin": false, 00:09:17.159 "nvme_io": false 00:09:17.159 }, 00:09:17.159 "driver_specific": { 00:09:17.159 "gpt": { 00:09:17.159 "base_bdev": "Nvme0n1", 00:09:17.159 "offset_blocks": 774400, 00:09:17.159 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:17.159 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:17.159 "partition_name": "SPDK_TEST_second" 00:09:17.159 } 00:09:17.159 } 00:09:17.159 } 00:09:17.159 ]' 00:09:17.159 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r length 00:09:17.159 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ 1 == \1 ]] 00:09:17.159 11:51:15 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].aliases[0]' 00:09:17.159 11:51:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:17.159 11:51:16 blockdev_nvme_gpt.bdev_gpt_uuid -- 
bdev/blockdev.sh@629 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:17.431 11:51:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@629 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:17.431 11:51:16 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@631 -- # killprocess 80072 00:09:17.431 11:51:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@946 -- # '[' -z 80072 ']' 00:09:17.431 11:51:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # kill -0 80072 00:09:17.431 11:51:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@951 -- # uname 00:09:17.431 11:51:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:17.431 11:51:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 80072 00:09:17.431 11:51:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:17.431 11:51:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:17.431 11:51:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 80072' 00:09:17.431 killing process with pid 80072 00:09:17.431 11:51:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@965 -- # kill 80072 00:09:17.431 11:51:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # wait 80072 00:09:17.689 00:09:17.689 real 0m1.998s 00:09:17.689 user 0m2.196s 00:09:17.689 sys 0m0.435s 00:09:17.689 ************************************ 00:09:17.689 END TEST bdev_gpt_uuid 00:09:17.689 ************************************ 00:09:17.689 11:51:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:17.689 11:51:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:17.689 11:51:16 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # [[ gpt == crypto_sw ]] 00:09:17.689 11:51:16 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:09:17.689 11:51:16 blockdev_nvme_gpt -- bdev/blockdev.sh@811 -- # cleanup 00:09:17.689 11:51:16 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:17.689 11:51:16 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:17.689 11:51:16 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:17.689 11:51:16 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:17.689 11:51:16 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:17.689 11:51:16 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:18.282 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:18.541 Waiting for block devices as requested 00:09:18.541 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.799 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.799 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:18.799 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:24.065 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:24.065 11:51:22 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme1n1 ]] 00:09:24.065 11:51:22 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- 
# wipefs --all /dev/nvme1n1 00:09:24.065 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:24.065 /dev/nvme1n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:09:24.065 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:24.065 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:09:24.323 11:51:22 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:09:24.323 00:09:24.323 real 0m51.321s 00:09:24.323 user 1m3.763s 00:09:24.323 sys 0m10.559s 00:09:24.323 ************************************ 00:09:24.323 END TEST blockdev_nvme_gpt 00:09:24.323 ************************************ 00:09:24.323 11:51:22 blockdev_nvme_gpt -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:24.323 11:51:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:24.323 11:51:22 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:24.323 11:51:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:24.323 11:51:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:24.323 11:51:22 -- common/autotest_common.sh@10 -- # set +x 00:09:24.323 ************************************ 00:09:24.323 START TEST nvme 00:09:24.323 ************************************ 00:09:24.323 11:51:22 nvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:24.323 * Looking for test storage... 00:09:24.323 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:24.323 11:51:23 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:25.259 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:25.826 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:25.826 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:25.826 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:25.826 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:25.826 11:51:24 nvme -- nvme/nvme.sh@79 -- # uname 00:09:25.826 11:51:24 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:25.826 11:51:24 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:25.826 11:51:24 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:25.826 11:51:24 nvme -- common/autotest_common.sh@1078 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:25.826 11:51:24 nvme -- common/autotest_common.sh@1064 -- # _randomize_va_space=2 00:09:25.826 11:51:24 nvme -- common/autotest_common.sh@1065 -- # echo 0 00:09:25.826 11:51:24 nvme -- common/autotest_common.sh@1067 -- # stubpid=80697 00:09:25.826 Waiting for stub to ready for secondary processes... 00:09:25.826 11:51:24 nvme -- common/autotest_common.sh@1068 -- # echo Waiting for stub to ready for secondary processes... 00:09:25.826 11:51:24 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:25.826 11:51:24 nvme -- common/autotest_common.sh@1071 -- # [[ -e /proc/80697 ]] 00:09:25.826 11:51:24 nvme -- common/autotest_common.sh@1072 -- # sleep 1s 00:09:25.826 11:51:24 nvme -- common/autotest_common.sh@1066 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:26.085 [2024-07-21 11:51:24.732577] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:09:26.085 [2024-07-21 11:51:24.732730] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:09:27.047 11:51:25 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:27.047 11:51:25 nvme -- common/autotest_common.sh@1071 -- # [[ -e /proc/80697 ]] 00:09:27.047 11:51:25 nvme -- common/autotest_common.sh@1072 -- # sleep 1s 00:09:27.047 [2024-07-21 11:51:25.731356] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:27.047 [2024-07-21 11:51:25.767102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.047 [2024-07-21 11:51:25.767154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.047 [2024-07-21 11:51:25.767281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:27.047 [2024-07-21 11:51:25.779129] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:09:27.047 [2024-07-21 11:51:25.779187] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:27.047 [2024-07-21 11:51:25.794260] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:27.047 [2024-07-21 11:51:25.794457] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:27.047 [2024-07-21 11:51:25.795384] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:27.047 [2024-07-21 11:51:25.796029] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:27.047 [2024-07-21 11:51:25.796111] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:27.047 [2024-07-21 11:51:25.796629] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:27.047 [2024-07-21 11:51:25.796887] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:27.047 [2024-07-21 11:51:25.797045] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:27.047 [2024-07-21 11:51:25.798032] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:27.047 [2024-07-21 11:51:25.798428] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:27.047 [2024-07-21 11:51:25.798595] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:27.047 [2024-07-21 11:51:25.798783] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:27.047 [2024-07-21 11:51:25.798923] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:27.985 done. 00:09:27.985 11:51:26 nvme -- common/autotest_common.sh@1069 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:27.985 11:51:26 nvme -- common/autotest_common.sh@1074 -- # echo done. 
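The 'Waiting for stub to ready for secondary processes...' exchange above is a simple handshake: autotest_common.sh launches the stub (pid 80697 here) as the primary DPDK process holding hugepage memory (-s 4096), then polls once per second until the stub creates /var/run/spdk_stub0, bailing out if the stub's /proc entry vanishes. A bash sketch of that wait, with the pid, paths, and 1-second sleep taken from the trace; the exact control flow inside autotest_common.sh may differ:

  # Poll until the SPDK stub signals readiness via /var/run/spdk_stub0.
  stubpid=80697
  echo 'Waiting for stub to ready for secondary processes...'
  while [ ! -e /var/run/spdk_stub0 ]; do
      [ -e /proc/$stubpid ] || { echo 'stub exited early' >&2; exit 1; }
      sleep 1s
  done
  echo done.

Once the stub is up, the short-lived test binaries below (reset, spdk_nvme_identify with the -i 0 instance id) can attach to its shared memory instead of re-initializing the NVMe controllers for every run.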
00:09:27.985 11:51:26 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:27.985 11:51:26 nvme -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:09:27.985 11:51:26 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:27.985 11:51:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:27.985 ************************************ 00:09:27.985 START TEST nvme_reset 00:09:27.985 ************************************ 00:09:27.985 11:51:26 nvme.nvme_reset -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:28.244 Initializing NVMe Controllers 00:09:28.244 Skipping QEMU NVMe SSD at 0000:00:10.0 00:09:28.244 Skipping QEMU NVMe SSD at 0000:00:11.0 00:09:28.244 Skipping QEMU NVMe SSD at 0000:00:13.0 00:09:28.244 Skipping QEMU NVMe SSD at 0000:00:12.0 00:09:28.244 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:28.244 00:09:28.244 real 0m0.252s 00:09:28.244 user 0m0.085s 00:09:28.244 sys 0m0.121s 00:09:28.244 11:51:26 nvme.nvme_reset -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:28.244 11:51:26 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:09:28.244 ************************************ 00:09:28.244 END TEST nvme_reset 00:09:28.244 ************************************ 00:09:28.244 11:51:27 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:28.244 11:51:27 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:28.244 11:51:27 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:28.244 11:51:27 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:28.245 ************************************ 00:09:28.245 START TEST nvme_identify 00:09:28.245 ************************************ 00:09:28.245 11:51:27 nvme.nvme_identify -- common/autotest_common.sh@1121 -- # nvme_identify 00:09:28.245 11:51:27 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:09:28.245 11:51:27 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:28.245 11:51:27 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:28.245 11:51:27 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:28.245 11:51:27 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:28.245 11:51:27 nvme.nvme_identify -- common/autotest_common.sh@1509 -- # local bdfs 00:09:28.245 11:51:27 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:28.245 11:51:27 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:28.245 11:51:27 nvme.nvme_identify -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:09:28.505 11:51:27 nvme.nvme_identify -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:09:28.506 11:51:27 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:28.506 11:51:27 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:28.506 [2024-07-21 11:51:27.357987] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 80730 terminated unexpected 00:09:28.506 ===================================================== 00:09:28.506 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:28.506 
===================================================== 00:09:28.506 Controller Capabilities/Features 00:09:28.506 ================================ 00:09:28.506 Vendor ID: 1b36 00:09:28.506 Subsystem Vendor ID: 1af4 00:09:28.506 Serial Number: 12340 00:09:28.506 Model Number: QEMU NVMe Ctrl 00:09:28.506 Firmware Version: 8.0.0 00:09:28.506 Recommended Arb Burst: 6 00:09:28.506 IEEE OUI Identifier: 00 54 52 00:09:28.506 Multi-path I/O 00:09:28.506 May have multiple subsystem ports: No 00:09:28.506 May have multiple controllers: No 00:09:28.506 Associated with SR-IOV VF: No 00:09:28.506 Max Data Transfer Size: 524288 00:09:28.506 Max Number of Namespaces: 256 00:09:28.506 Max Number of I/O Queues: 64 00:09:28.506 NVMe Specification Version (VS): 1.4 00:09:28.506 NVMe Specification Version (Identify): 1.4 00:09:28.506 Maximum Queue Entries: 2048 00:09:28.506 Contiguous Queues Required: Yes 00:09:28.506 Arbitration Mechanisms Supported 00:09:28.506 Weighted Round Robin: Not Supported 00:09:28.506 Vendor Specific: Not Supported 00:09:28.506 Reset Timeout: 7500 ms 00:09:28.506 Doorbell Stride: 4 bytes 00:09:28.506 NVM Subsystem Reset: Not Supported 00:09:28.506 Command Sets Supported 00:09:28.506 NVM Command Set: Supported 00:09:28.506 Boot Partition: Not Supported 00:09:28.506 Memory Page Size Minimum: 4096 bytes 00:09:28.506 Memory Page Size Maximum: 65536 bytes 00:09:28.506 Persistent Memory Region: Not Supported 00:09:28.506 Optional Asynchronous Events Supported 00:09:28.506 Namespace Attribute Notices: Supported 00:09:28.506 Firmware Activation Notices: Not Supported 00:09:28.506 ANA Change Notices: Not Supported 00:09:28.506 PLE Aggregate Log Change Notices: Not Supported 00:09:28.506 LBA Status Info Alert Notices: Not Supported 00:09:28.506 EGE Aggregate Log Change Notices: Not Supported 00:09:28.506 Normal NVM Subsystem Shutdown event: Not Supported 00:09:28.506 Zone Descriptor Change Notices: Not Supported 00:09:28.506 Discovery Log Change Notices: Not Supported 00:09:28.506 Controller Attributes 00:09:28.506 128-bit Host Identifier: Not Supported 00:09:28.506 Non-Operational Permissive Mode: Not Supported 00:09:28.506 NVM Sets: Not Supported 00:09:28.506 Read Recovery Levels: Not Supported 00:09:28.506 Endurance Groups: Not Supported 00:09:28.506 Predictable Latency Mode: Not Supported 00:09:28.506 Traffic Based Keep ALive: Not Supported 00:09:28.506 Namespace Granularity: Not Supported 00:09:28.506 SQ Associations: Not Supported 00:09:28.506 UUID List: Not Supported 00:09:28.506 Multi-Domain Subsystem: Not Supported 00:09:28.506 Fixed Capacity Management: Not Supported 00:09:28.506 Variable Capacity Management: Not Supported 00:09:28.506 Delete Endurance Group: Not Supported 00:09:28.506 Delete NVM Set: Not Supported 00:09:28.506 Extended LBA Formats Supported: Supported 00:09:28.506 Flexible Data Placement Supported: Not Supported 00:09:28.506 00:09:28.506 Controller Memory Buffer Support 00:09:28.506 ================================ 00:09:28.506 Supported: No 00:09:28.506 00:09:28.506 Persistent Memory Region Support 00:09:28.506 ================================ 00:09:28.506 Supported: No 00:09:28.506 00:09:28.506 Admin Command Set Attributes 00:09:28.506 ============================ 00:09:28.506 Security Send/Receive: Not Supported 00:09:28.506 Format NVM: Supported 00:09:28.506 Firmware Activate/Download: Not Supported 00:09:28.506 Namespace Management: Supported 00:09:28.506 Device Self-Test: Not Supported 00:09:28.506 Directives: Supported 00:09:28.506 NVMe-MI: Not Supported 
00:09:28.506 Virtualization Management: Not Supported 00:09:28.506 Doorbell Buffer Config: Supported 00:09:28.506 Get LBA Status Capability: Not Supported 00:09:28.506 Command & Feature Lockdown Capability: Not Supported 00:09:28.506 Abort Command Limit: 4 00:09:28.506 Async Event Request Limit: 4 00:09:28.506 Number of Firmware Slots: N/A 00:09:28.506 Firmware Slot 1 Read-Only: N/A 00:09:28.506 Firmware Activation Without Reset: N/A 00:09:28.506 Multiple Update Detection Support: N/A 00:09:28.506 Firmware Update Granularity: No Information Provided 00:09:28.506 Per-Namespace SMART Log: Yes 00:09:28.506 Asymmetric Namespace Access Log Page: Not Supported 00:09:28.506 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:28.506 Command Effects Log Page: Supported 00:09:28.506 Get Log Page Extended Data: Supported 00:09:28.506 Telemetry Log Pages: Not Supported 00:09:28.506 Persistent Event Log Pages: Not Supported 00:09:28.506 Supported Log Pages Log Page: May Support 00:09:28.506 Commands Supported & Effects Log Page: Not Supported 00:09:28.506 Feature Identifiers & Effects Log Page:May Support 00:09:28.506 NVMe-MI Commands & Effects Log Page: May Support 00:09:28.506 Data Area 4 for Telemetry Log: Not Supported 00:09:28.506 Error Log Page Entries Supported: 1 00:09:28.506 Keep Alive: Not Supported 00:09:28.506 00:09:28.506 NVM Command Set Attributes 00:09:28.506 ========================== 00:09:28.506 Submission Queue Entry Size 00:09:28.506 Max: 64 00:09:28.506 Min: 64 00:09:28.506 Completion Queue Entry Size 00:09:28.506 Max: 16 00:09:28.506 Min: 16 00:09:28.506 Number of Namespaces: 256 00:09:28.506 Compare Command: Supported 00:09:28.506 Write Uncorrectable Command: Not Supported 00:09:28.506 Dataset Management Command: Supported 00:09:28.506 Write Zeroes Command: Supported 00:09:28.506 Set Features Save Field: Supported 00:09:28.506 Reservations: Not Supported 00:09:28.506 Timestamp: Supported 00:09:28.506 Copy: Supported 00:09:28.506 Volatile Write Cache: Present 00:09:28.506 Atomic Write Unit (Normal): 1 00:09:28.506 Atomic Write Unit (PFail): 1 00:09:28.506 Atomic Compare & Write Unit: 1 00:09:28.506 Fused Compare & Write: Not Supported 00:09:28.506 Scatter-Gather List 00:09:28.506 SGL Command Set: Supported 00:09:28.506 SGL Keyed: Not Supported 00:09:28.506 SGL Bit Bucket Descriptor: Not Supported 00:09:28.506 SGL Metadata Pointer: Not Supported 00:09:28.506 Oversized SGL: Not Supported 00:09:28.506 SGL Metadata Address: Not Supported 00:09:28.506 SGL Offset: Not Supported 00:09:28.506 Transport SGL Data Block: Not Supported 00:09:28.506 Replay Protected Memory Block: Not Supported 00:09:28.506 00:09:28.506 Firmware Slot Information 00:09:28.506 ========================= 00:09:28.506 Active slot: 1 00:09:28.506 Slot 1 Firmware Revision: 1.0 00:09:28.506 00:09:28.506 00:09:28.506 Commands Supported and Effects 00:09:28.506 ============================== 00:09:28.506 Admin Commands 00:09:28.506 -------------- 00:09:28.506 Delete I/O Submission Queue (00h): Supported 00:09:28.506 Create I/O Submission Queue (01h): Supported 00:09:28.506 Get Log Page (02h): Supported 00:09:28.506 Delete I/O Completion Queue (04h): Supported 00:09:28.506 Create I/O Completion Queue (05h): Supported 00:09:28.506 Identify (06h): Supported 00:09:28.506 Abort (08h): Supported 00:09:28.506 Set Features (09h): Supported 00:09:28.506 Get Features (0Ah): Supported 00:09:28.506 Asynchronous Event Request (0Ch): Supported 00:09:28.506 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:28.506 Directive 
Send (19h): Supported 00:09:28.506 Directive Receive (1Ah): Supported 00:09:28.506 Virtualization Management (1Ch): Supported 00:09:28.506 Doorbell Buffer Config (7Ch): Supported 00:09:28.506 Format NVM (80h): Supported LBA-Change 00:09:28.506 I/O Commands 00:09:28.506 ------------ 00:09:28.506 Flush (00h): Supported LBA-Change 00:09:28.506 Write (01h): Supported LBA-Change 00:09:28.506 Read (02h): Supported 00:09:28.506 Compare (05h): Supported 00:09:28.506 Write Zeroes (08h): Supported LBA-Change 00:09:28.506 Dataset Management (09h): Supported LBA-Change 00:09:28.506 Unknown (0Ch): Supported 00:09:28.506 Unknown (12h): Supported 00:09:28.506 Copy (19h): Supported LBA-Change 00:09:28.506 Unknown (1Dh): Supported LBA-Change 00:09:28.506 00:09:28.506 Error Log 00:09:28.506 ========= 00:09:28.506 00:09:28.506 Arbitration 00:09:28.506 =========== 00:09:28.506 Arbitration Burst: no limit 00:09:28.506 00:09:28.506 Power Management 00:09:28.506 ================ 00:09:28.506 Number of Power States: 1 00:09:28.506 Current Power State: Power State #0 00:09:28.506 Power State #0: 00:09:28.506 Max Power: 25.00 W 00:09:28.506 Non-Operational State: Operational 00:09:28.506 Entry Latency: 16 microseconds 00:09:28.506 Exit Latency: 4 microseconds 00:09:28.506 Relative Read Throughput: 0 00:09:28.506 Relative Read Latency: 0 00:09:28.506 Relative Write Throughput: 0 00:09:28.506 Relative Write Latency: 0 00:09:28.506 Idle Power[2024-07-21 11:51:27.359007] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 80730 terminated unexpected 00:09:28.506 : Not Reported 00:09:28.506 Active Power: Not Reported 00:09:28.506 Non-Operational Permissive Mode: Not Supported 00:09:28.506 00:09:28.506 Health Information 00:09:28.506 ================== 00:09:28.506 Critical Warnings: 00:09:28.506 Available Spare Space: OK 00:09:28.506 Temperature: OK 00:09:28.506 Device Reliability: OK 00:09:28.506 Read Only: No 00:09:28.506 Volatile Memory Backup: OK 00:09:28.506 Current Temperature: 323 Kelvin (50 Celsius) 00:09:28.506 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:28.506 Available Spare: 0% 00:09:28.506 Available Spare Threshold: 0% 00:09:28.506 Life Percentage Used: 0% 00:09:28.506 Data Units Read: 1079 00:09:28.506 Data Units Written: 907 00:09:28.506 Host Read Commands: 50350 00:09:28.506 Host Write Commands: 48793 00:09:28.506 Controller Busy Time: 0 minutes 00:09:28.506 Power Cycles: 0 00:09:28.506 Power On Hours: 0 hours 00:09:28.506 Unsafe Shutdowns: 0 00:09:28.506 Unrecoverable Media Errors: 0 00:09:28.506 Lifetime Error Log Entries: 0 00:09:28.506 Warning Temperature Time: 0 minutes 00:09:28.506 Critical Temperature Time: 0 minutes 00:09:28.506 00:09:28.506 Number of Queues 00:09:28.506 ================ 00:09:28.506 Number of I/O Submission Queues: 64 00:09:28.506 Number of I/O Completion Queues: 64 00:09:28.506 00:09:28.506 ZNS Specific Controller Data 00:09:28.506 ============================ 00:09:28.506 Zone Append Size Limit: 0 00:09:28.506 00:09:28.506 00:09:28.506 Active Namespaces 00:09:28.506 ================= 00:09:28.506 Namespace ID:1 00:09:28.506 Error Recovery Timeout: Unlimited 00:09:28.506 Command Set Identifier: NVM (00h) 00:09:28.506 Deallocate: Supported 00:09:28.506 Deallocated/Unwritten Error: Supported 00:09:28.506 Deallocated Read Value: All 0x00 00:09:28.506 Deallocate in Write Zeroes: Not Supported 00:09:28.506 Deallocated Guard Field: 0xFFFF 00:09:28.506 Flush: Supported 00:09:28.506 Reservation: Not Supported 00:09:28.506 Metadata Transferred as: 
Separate Metadata Buffer 00:09:28.506 Namespace Sharing Capabilities: Private 00:09:28.506 Size (in LBAs): 1548666 (5GiB) 00:09:28.506 Capacity (in LBAs): 1548666 (5GiB) 00:09:28.506 Utilization (in LBAs): 1548666 (5GiB) 00:09:28.506 Thin Provisioning: Not Supported 00:09:28.506 Per-NS Atomic Units: No 00:09:28.506 Maximum Single Source Range Length: 128 00:09:28.506 Maximum Copy Length: 128 00:09:28.506 Maximum Source Range Count: 128 00:09:28.506 NGUID/EUI64 Never Reused: No 00:09:28.506 Namespace Write Protected: No 00:09:28.506 Number of LBA Formats: 8 00:09:28.506 Current LBA Format: LBA Format #07 00:09:28.506 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:28.506 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:28.506 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:28.506 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:28.506 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:28.506 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:28.506 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:28.506 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:28.506 00:09:28.506 ===================================================== 00:09:28.506 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:28.506 ===================================================== 00:09:28.506 Controller Capabilities/Features 00:09:28.506 ================================ 00:09:28.506 Vendor ID: 1b36 00:09:28.506 Subsystem Vendor ID: 1af4 00:09:28.506 Serial Number: 12341 00:09:28.506 Model Number: QEMU NVMe Ctrl 00:09:28.506 Firmware Version: 8.0.0 00:09:28.506 Recommended Arb Burst: 6 00:09:28.506 IEEE OUI Identifier: 00 54 52 00:09:28.506 Multi-path I/O 00:09:28.506 May have multiple subsystem ports: No 00:09:28.506 May have multiple controllers: No 00:09:28.506 Associated with SR-IOV VF: No 00:09:28.506 Max Data Transfer Size: 524288 00:09:28.506 Max Number of Namespaces: 256 00:09:28.506 Max Number of I/O Queues: 64 00:09:28.506 NVMe Specification Version (VS): 1.4 00:09:28.506 NVMe Specification Version (Identify): 1.4 00:09:28.506 Maximum Queue Entries: 2048 00:09:28.506 Contiguous Queues Required: Yes 00:09:28.506 Arbitration Mechanisms Supported 00:09:28.506 Weighted Round Robin: Not Supported 00:09:28.506 Vendor Specific: Not Supported 00:09:28.506 Reset Timeout: 7500 ms 00:09:28.506 Doorbell Stride: 4 bytes 00:09:28.506 NVM Subsystem Reset: Not Supported 00:09:28.506 Command Sets Supported 00:09:28.506 NVM Command Set: Supported 00:09:28.506 Boot Partition: Not Supported 00:09:28.506 Memory Page Size Minimum: 4096 bytes 00:09:28.506 Memory Page Size Maximum: 65536 bytes 00:09:28.506 Persistent Memory Region: Not Supported 00:09:28.506 Optional Asynchronous Events Supported 00:09:28.506 Namespace Attribute Notices: Supported 00:09:28.506 Firmware Activation Notices: Not Supported 00:09:28.506 ANA Change Notices: Not Supported 00:09:28.506 PLE Aggregate Log Change Notices: Not Supported 00:09:28.506 LBA Status Info Alert Notices: Not Supported 00:09:28.506 EGE Aggregate Log Change Notices: Not Supported 00:09:28.506 Normal NVM Subsystem Shutdown event: Not Supported 00:09:28.506 Zone Descriptor Change Notices: Not Supported 00:09:28.506 Discovery Log Change Notices: Not Supported 00:09:28.506 Controller Attributes 00:09:28.506 128-bit Host Identifier: Not Supported 00:09:28.506 Non-Operational Permissive Mode: Not Supported 00:09:28.506 NVM Sets: Not Supported 00:09:28.506 Read Recovery Levels: Not Supported 00:09:28.506 Endurance Groups: Not Supported 00:09:28.506 
Predictable Latency Mode: Not Supported 00:09:28.506 Traffic Based Keep ALive: Not Supported 00:09:28.506 Namespace Granularity: Not Supported 00:09:28.506 SQ Associations: Not Supported 00:09:28.506 UUID List: Not Supported 00:09:28.506 Multi-Domain Subsystem: Not Supported 00:09:28.506 Fixed Capacity Management: Not Supported 00:09:28.506 Variable Capacity Management: Not Supported 00:09:28.506 Delete Endurance Group: Not Supported 00:09:28.506 Delete NVM Set: Not Supported 00:09:28.506 Extended LBA Formats Supported: Supported 00:09:28.506 Flexible Data Placement Supported: Not Supported 00:09:28.506 00:09:28.506 Controller Memory Buffer Support 00:09:28.506 ================================ 00:09:28.506 Supported: No 00:09:28.506 00:09:28.506 Persistent Memory Region Support 00:09:28.506 ================================ 00:09:28.506 Supported: No 00:09:28.506 00:09:28.506 Admin Command Set Attributes 00:09:28.506 ============================ 00:09:28.506 Security Send/Receive: Not Supported 00:09:28.506 Format NVM: Supported 00:09:28.506 Firmware Activate/Download: Not Supported 00:09:28.506 Namespace Management: Supported 00:09:28.506 Device Self-Test: Not Supported 00:09:28.506 Directives: Supported 00:09:28.506 NVMe-MI: Not Supported 00:09:28.506 Virtualization Management: Not Supported 00:09:28.506 Doorbell Buffer Config: Supported 00:09:28.506 Get LBA Status Capability: Not Supported 00:09:28.506 Command & Feature Lockdown Capability: Not Supported 00:09:28.506 Abort Command Limit: 4 00:09:28.506 Async Event Request Limit: 4 00:09:28.506 Number of Firmware Slots: N/A 00:09:28.506 Firmware Slot 1 Read-Only: N/A 00:09:28.506 Firmware Activation Without Reset: N/A 00:09:28.506 Multiple Update Detection Support: N/A 00:09:28.506 Firmware Update Granularity: No Information Provided 00:09:28.506 Per-Namespace SMART Log: Yes 00:09:28.506 Asymmetric Namespace Access Log Page: Not Supported 00:09:28.506 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:28.506 Command Effects Log Page: Supported 00:09:28.506 Get Log Page Extended Data: Supported 00:09:28.506 Telemetry Log Pages: Not Supported 00:09:28.506 Persistent Event Log Pages: Not Supported 00:09:28.506 Supported Log Pages Log Page: May Support 00:09:28.506 Commands Supported & Effects Log Page: Not Supported 00:09:28.506 Feature Identifiers & Effects Log Page:May Support 00:09:28.506 NVMe-MI Commands & Effects Log Page: May Support 00:09:28.506 Data Area 4 for Telemetry Log: Not Supported 00:09:28.506 Error Log Page Entries Supported: 1 00:09:28.506 Keep Alive: Not Supported 00:09:28.506 00:09:28.506 NVM Command Set Attributes 00:09:28.506 ========================== 00:09:28.506 Submission Queue Entry Size 00:09:28.506 Max: 64 00:09:28.506 Min: 64 00:09:28.506 Completion Queue Entry Size 00:09:28.506 Max: 16 00:09:28.506 Min: 16 00:09:28.506 Number of Namespaces: 256 00:09:28.506 Compare Command: Supported 00:09:28.506 Write Uncorrectable Command: Not Supported 00:09:28.507 Dataset Management Command: Supported 00:09:28.507 Write Zeroes Command: Supported 00:09:28.507 Set Features Save Field: Supported 00:09:28.507 Reservations: Not Supported 00:09:28.507 Timestamp: Supported 00:09:28.507 Copy: Supported 00:09:28.507 Volatile Write Cache: Present 00:09:28.507 Atomic Write Unit (Normal): 1 00:09:28.507 Atomic Write Unit (PFail): 1 00:09:28.507 Atomic Compare & Write Unit: 1 00:09:28.507 Fused Compare & Write: Not Supported 00:09:28.507 Scatter-Gather List 00:09:28.507 SGL Command Set: Supported 00:09:28.507 SGL Keyed: Not Supported 
00:09:28.507 SGL Bit Bucket Descriptor: Not Supported 00:09:28.507 SGL Metadata Pointer: Not Supported 00:09:28.507 Oversized SGL: Not Supported 00:09:28.507 SGL Metadata Address: Not Supported 00:09:28.507 SGL Offset: Not Supported 00:09:28.507 Transport SGL Data Block: Not Supported 00:09:28.507 Replay Protected Memory Block: Not Supported 00:09:28.507 00:09:28.507 Firmware Slot Information 00:09:28.507 ========================= 00:09:28.507 Active slot: 1 00:09:28.507 Slot 1 Firmware Revision: 1.0 00:09:28.507 00:09:28.507 00:09:28.507 Commands Supported and Effects 00:09:28.507 ============================== 00:09:28.507 Admin Commands 00:09:28.507 -------------- 00:09:28.507 Delete I/O Submission Queue (00h): Supported 00:09:28.507 Create I/O Submission Queue (01h): Supported 00:09:28.507 Get Log Page (02h): Supported 00:09:28.507 Delete I/O Completion Queue (04h): Supported 00:09:28.507 Create I/O Completion Queue (05h): Supported 00:09:28.507 Identify (06h): Supported 00:09:28.507 Abort (08h): Supported 00:09:28.507 Set Features (09h): Supported 00:09:28.507 Get Features (0Ah): Supported 00:09:28.507 Asynchronous Event Request (0Ch): Supported 00:09:28.507 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:28.507 Directive Send (19h): Supported 00:09:28.507 Directive Receive (1Ah): Supported 00:09:28.507 Virtualization Management (1Ch): Supported 00:09:28.507 Doorbell Buffer Config (7Ch): Supported 00:09:28.507 Format NVM (80h): Supported LBA-Change 00:09:28.507 I/O Commands 00:09:28.507 ------------ 00:09:28.507 Flush (00h): Supported LBA-Change 00:09:28.507 Write (01h): Supported LBA-Change 00:09:28.507 Read (02h): Supported 00:09:28.507 Compare (05h): Supported 00:09:28.507 Write Zeroes (08h): Supported LBA-Change 00:09:28.507 Dataset Management (09h): Supported LBA-Change 00:09:28.507 Unknown (0Ch): Supported 00:09:28.507 Unknown (12h): Supported 00:09:28.507 Copy (19h): Supported LBA-Change 00:09:28.507 Unknown (1Dh): Supported LBA-Change 00:09:28.507 00:09:28.507 Error Log 00:09:28.507 ========= 00:09:28.507 00:09:28.507 Arbitration 00:09:28.507 =========== 00:09:28.507 Arbitration Burst: no limit 00:09:28.507 00:09:28.507 Power Management 00:09:28.507 ================ 00:09:28.507 Number of Power States: 1 00:09:28.507 Current Power State: Power State #0 00:09:28.507 Power State #0: 00:09:28.507 Max Power: 25.00 W 00:09:28.507 Non-Operational State: Operational 00:09:28.507 Entry Latency: 16 microseconds 00:09:28.507 Exit Latency: 4 microseconds 00:09:28.507 Relative Read Throughput: 0 00:09:28.507 Relative Read Latency: 0 00:09:28.507 Relative Write Throughput: 0 00:09:28.507 Relative Write Latency: 0 00:09:28.507 Idle Power: Not Reported 00:09:28.507 Active Power: Not Reported 00:09:28.507 Non-Operational Permissive Mode: Not Supported 00:09:28.507 00:09:28.507 Health Information 00:09:28.507 ================== 00:09:28.507 Critical Warnings: 00:09:28.507 Available Spare Space: OK 00:09:28.507 Temperature: OK 00:09:28.507 Device Reliability: OK 00:09:28.507 Read Only: No 00:09:28.507 Volatile Memory Backup: OK 00:09:28.507 Current Temperature: 323 Kelvin (50 Celsius) 00:09:28.507 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:28.507 Available Spare: 0% 00:09:28.507 Available Spare Threshold: 0% 00:09:28.507 Life Percentage Used: 0% 00:09:28.507 Data Units Read: 807 00:09:28.507 Data Units Written: 657 00:09:28.507 Host Read Commands: 36335 00:09:28.507 Host Write Commands: 34106 00:09:28.507 Controller Busy Time: 0 minutes 00:09:28.507 Power Cycles: 0 
00:09:28.507 Power On Hours: 0 hours 00:09:28.507 Unsafe Shutdowns: 0 00:09:28.507 Unrecoverable Media Errors: 0 00:09:28.507 Lifetime Error Log Entries: 0 00:09:28.507 Warning Temperature Time: 0 minutes 00:09:28.507 Critical Temperature Time: 0 minutes 00:09:28.507 00:09:28.507 Number of Queues 00:09:28.507 ================ 00:09:28.507 Number of I/O Submission Queues: 64 00:09:28.507 Number of I/O Completion Queues: 64 00:09:28.507 00:09:28.507 ZNS Specific Controller Data 00:09:28.507 ============================ 00:09:28.507 Zone Append Size Limit: 0 00:09:28.507 00:09:28.507 00:09:28.507 Active Namespaces 00:09:28.507 ================= 00:09:28.507 Namespace ID:1 00:09:28.507 Error Recovery Timeout: Unlimited
[2024-07-21 11:51:27.359618] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 80730 terminated unexpected
00:09:28.507 Command Set Identifier: NVM (00h) 00:09:28.507 Deallocate: Supported 00:09:28.507 Deallocated/Unwritten Error: Supported 00:09:28.507 Deallocated Read Value: All 0x00 00:09:28.507 Deallocate in Write Zeroes: Not Supported 00:09:28.507 Deallocated Guard Field: 0xFFFF 00:09:28.507 Flush: Supported 00:09:28.507 Reservation: Not Supported 00:09:28.507 Namespace Sharing Capabilities: Private 00:09:28.507 Size (in LBAs): 1310720 (5GiB) 00:09:28.507 Capacity (in LBAs): 1310720 (5GiB) 00:09:28.507 Utilization (in LBAs): 1310720 (5GiB) 00:09:28.507 Thin Provisioning: Not Supported 00:09:28.507 Per-NS Atomic Units: No 00:09:28.507 Maximum Single Source Range Length: 128 00:09:28.507 Maximum Copy Length: 128 00:09:28.507 Maximum Source Range Count: 128 00:09:28.507 NGUID/EUI64 Never Reused: No 00:09:28.507 Namespace Write Protected: No 00:09:28.507 Number of LBA Formats: 8 00:09:28.507 Current LBA Format: LBA Format #04 00:09:28.507 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:28.507 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:28.507 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:28.507 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:28.507 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:28.507 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:28.507 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:28.507 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:28.507 00:09:28.507 ===================================================== 00:09:28.507 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:28.507 ===================================================== 00:09:28.507 Controller Capabilities/Features 00:09:28.507 ================================ 00:09:28.507 Vendor ID: 1b36 00:09:28.507 Subsystem Vendor ID: 1af4 00:09:28.507 Serial Number: 12343 00:09:28.507 Model Number: QEMU NVMe Ctrl 00:09:28.507 Firmware Version: 8.0.0 00:09:28.507 Recommended Arb Burst: 6 00:09:28.507 IEEE OUI Identifier: 00 54 52 00:09:28.507 Multi-path I/O 00:09:28.507 May have multiple subsystem ports: No 00:09:28.507 May have multiple controllers: Yes 00:09:28.507 Associated with SR-IOV VF: No 00:09:28.507 Max Data Transfer Size: 524288 00:09:28.507 Max Number of Namespaces: 256 00:09:28.507 Max Number of I/O Queues: 64 00:09:28.507 NVMe Specification Version (VS): 1.4 00:09:28.507 NVMe Specification Version (Identify): 1.4 00:09:28.507 Maximum Queue Entries: 2048 00:09:28.507 Contiguous Queues Required: Yes 00:09:28.507 Arbitration Mechanisms Supported 00:09:28.507 Weighted Round Robin: Not Supported 00:09:28.507 Vendor Specific: Not Supported 00:09:28.507 Reset Timeout: 7500 ms 00:09:28.507 
Doorbell Stride: 4 bytes 00:09:28.507 NVM Subsystem Reset: Not Supported 00:09:28.507 Command Sets Supported 00:09:28.507 NVM Command Set: Supported 00:09:28.507 Boot Partition: Not Supported 00:09:28.507 Memory Page Size Minimum: 4096 bytes 00:09:28.507 Memory Page Size Maximum: 65536 bytes 00:09:28.507 Persistent Memory Region: Not Supported 00:09:28.507 Optional Asynchronous Events Supported 00:09:28.507 Namespace Attribute Notices: Supported 00:09:28.507 Firmware Activation Notices: Not Supported 00:09:28.507 ANA Change Notices: Not Supported 00:09:28.507 PLE Aggregate Log Change Notices: Not Supported 00:09:28.507 LBA Status Info Alert Notices: Not Supported 00:09:28.507 EGE Aggregate Log Change Notices: Not Supported 00:09:28.507 Normal NVM Subsystem Shutdown event: Not Supported 00:09:28.507 Zone Descriptor Change Notices: Not Supported 00:09:28.507 Discovery Log Change Notices: Not Supported 00:09:28.507 Controller Attributes 00:09:28.507 128-bit Host Identifier: Not Supported 00:09:28.507 Non-Operational Permissive Mode: Not Supported 00:09:28.507 NVM Sets: Not Supported 00:09:28.507 Read Recovery Levels: Not Supported 00:09:28.507 Endurance Groups: Supported 00:09:28.507 Predictable Latency Mode: Not Supported 00:09:28.507 Traffic Based Keep ALive: Not Supported 00:09:28.507 Namespace Granularity: Not Supported 00:09:28.507 SQ Associations: Not Supported 00:09:28.507 UUID List: Not Supported 00:09:28.507 Multi-Domain Subsystem: Not Supported 00:09:28.507 Fixed Capacity Management: Not Supported 00:09:28.507 Variable Capacity Management: Not Supported 00:09:28.507 Delete Endurance Group: Not Supported 00:09:28.507 Delete NVM Set: Not Supported 00:09:28.507 Extended LBA Formats Supported: Supported 00:09:28.507 Flexible Data Placement Supported: Supported 00:09:28.507 00:09:28.507 Controller Memory Buffer Support 00:09:28.507 ================================ 00:09:28.507 Supported: No 00:09:28.507 00:09:28.507 Persistent Memory Region Support 00:09:28.507 ================================ 00:09:28.507 Supported: No 00:09:28.507 00:09:28.507 Admin Command Set Attributes 00:09:28.507 ============================ 00:09:28.507 Security Send/Receive: Not Supported 00:09:28.507 Format NVM: Supported 00:09:28.507 Firmware Activate/Download: Not Supported 00:09:28.507 Namespace Management: Supported 00:09:28.507 Device Self-Test: Not Supported 00:09:28.507 Directives: Supported 00:09:28.507 NVMe-MI: Not Supported 00:09:28.507 Virtualization Management: Not Supported 00:09:28.507 Doorbell Buffer Config: Supported 00:09:28.507 Get LBA Status Capability: Not Supported 00:09:28.507 Command & Feature Lockdown Capability: Not Supported 00:09:28.507 Abort Command Limit: 4 00:09:28.507 Async Event Request Limit: 4 00:09:28.507 Number of Firmware Slots: N/A 00:09:28.507 Firmware Slot 1 Read-Only: N/A 00:09:28.507 Firmware Activation Without Reset: N/A 00:09:28.507 Multiple Update Detection Support: N/A 00:09:28.507 Firmware Update Granularity: No Information Provided 00:09:28.507 Per-Namespace SMART Log: Yes 00:09:28.507 Asymmetric Namespace Access Log Page: Not Supported 00:09:28.507 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:28.507 Command Effects Log Page: Supported 00:09:28.507 Get Log Page Extended Data: Supported 00:09:28.507 Telemetry Log Pages: Not Supported 00:09:28.507 Persistent Event Log Pages: Not Supported 00:09:28.507 Supported Log Pages Log Page: May Support 00:09:28.507 Commands Supported & Effects Log Page: Not Supported 00:09:28.507 Feature Identifiers & Effects Log 
Page:May Support 00:09:28.507 NVMe-MI Commands & Effects Log Page: May Support 00:09:28.507 Data Area 4 for Telemetry Log: Not Supported 00:09:28.507 Error Log Page Entries Supported: 1 00:09:28.507 Keep Alive: Not Supported 00:09:28.507 00:09:28.507 NVM Command Set Attributes 00:09:28.507 ========================== 00:09:28.507 Submission Queue Entry Size 00:09:28.507 Max: 64 00:09:28.507 Min: 64 00:09:28.507 Completion Queue Entry Size 00:09:28.507 Max: 16 00:09:28.507 Min: 16 00:09:28.507 Number of Namespaces: 256 00:09:28.507 Compare Command: Supported 00:09:28.507 Write Uncorrectable Command: Not Supported 00:09:28.507 Dataset Management Command: Supported 00:09:28.507 Write Zeroes Command: Supported 00:09:28.507 Set Features Save Field: Supported 00:09:28.507 Reservations: Not Supported 00:09:28.507 Timestamp: Supported 00:09:28.507 Copy: Supported 00:09:28.507 Volatile Write Cache: Present 00:09:28.507 Atomic Write Unit (Normal): 1 00:09:28.507 Atomic Write Unit (PFail): 1 00:09:28.507 Atomic Compare & Write Unit: 1 00:09:28.507 Fused Compare & Write: Not Supported 00:09:28.507 Scatter-Gather List 00:09:28.507 SGL Command Set: Supported 00:09:28.507 SGL Keyed: Not Supported 00:09:28.507 SGL Bit Bucket Descriptor: Not Supported 00:09:28.507 SGL Metadata Pointer: Not Supported 00:09:28.507 Oversized SGL: Not Supported 00:09:28.507 SGL Metadata Address: Not Supported 00:09:28.507 SGL Offset: Not Supported 00:09:28.507 Transport SGL Data Block: Not Supported 00:09:28.507 Replay Protected Memory Block: Not Supported 00:09:28.507 00:09:28.507 Firmware Slot Information 00:09:28.507 ========================= 00:09:28.507 Active slot: 1 00:09:28.507 Slot 1 Firmware Revision: 1.0 00:09:28.507 00:09:28.507 00:09:28.507 Commands Supported and Effects 00:09:28.507 ============================== 00:09:28.507 Admin Commands 00:09:28.507 -------------- 00:09:28.507 Delete I/O Submission Queue (00h): Supported 00:09:28.507 Create I/O Submission Queue (01h): Supported 00:09:28.507 Get Log Page (02h): Supported 00:09:28.507 Delete I/O Completion Queue (04h): Supported 00:09:28.507 Create I/O Completion Queue (05h): Supported 00:09:28.507 Identify (06h): Supported 00:09:28.507 Abort (08h): Supported 00:09:28.507 Set Features (09h): Supported 00:09:28.507 Get Features (0Ah): Supported 00:09:28.507 Asynchronous Event Request (0Ch): Supported 00:09:28.507 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:28.507 Directive Send (19h): Supported 00:09:28.507 Directive Receive (1Ah): Supported 00:09:28.507 Virtualization Management (1Ch): Supported 00:09:28.507 Doorbell Buffer Config (7Ch): Supported 00:09:28.507 Format NVM (80h): Supported LBA-Change 00:09:28.507 I/O Commands 00:09:28.507 ------------ 00:09:28.507 Flush (00h): Supported LBA-Change 00:09:28.507 Write (01h): Supported LBA-Change 00:09:28.507 Read (02h): Supported 00:09:28.507 Compare (05h): Supported 00:09:28.507 Write Zeroes (08h): Supported LBA-Change 00:09:28.507 Dataset Management (09h): Supported LBA-Change 00:09:28.507 Unknown (0Ch): Supported 00:09:28.507 Unknown (12h): Supported 00:09:28.507 Copy (19h): Supported LBA-Change 00:09:28.507 Unknown (1Dh): Supported LBA-Change 00:09:28.507 00:09:28.507 Error Log 00:09:28.507 ========= 00:09:28.507 00:09:28.507 Arbitration 00:09:28.507 =========== 00:09:28.507 Arbitration Burst: no limit 00:09:28.507 00:09:28.507 Power Management 00:09:28.507 ================ 00:09:28.507 Number of Power States: 1 00:09:28.507 Current Power State: Power State #0 00:09:28.507 Power State #0: 
00:09:28.507 Max Power: 25.00 W 00:09:28.507 Non-Operational State: Operational 00:09:28.507 Entry Latency: 16 microseconds 00:09:28.507 Exit Latency: 4 microseconds 00:09:28.507 Relative Read Throughput: 0 00:09:28.507 Relative Read Latency: 0 00:09:28.507 Relative Write Throughput: 0 00:09:28.507 Relative Write Latency: 0 00:09:28.507 Idle Power: Not Reported 00:09:28.507 Active Power: Not Reported 00:09:28.507 Non-Operational Permissive Mode: Not Supported 00:09:28.507 00:09:28.507 Health Information 00:09:28.507 ================== 00:09:28.507 Critical Warnings: 00:09:28.507 Available Spare Space: OK 00:09:28.507 Temperature: OK 00:09:28.507 Device Reliability: OK 00:09:28.507 Read Only: No 00:09:28.507 Volatile Memory Backup: OK 00:09:28.507 Current Temperature: 323 Kelvin (50 Celsius) 00:09:28.507 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:28.507 Available Spare: 0% 00:09:28.507 Available Spare Threshold: 0% 00:09:28.507 Life Percentage Used: 0% 00:09:28.507 Data Units Read: 859 00:09:28.507 Data Units Written: 752 00:09:28.507 Host Read Commands: 36440 00:09:28.507 Host Write Commands: 35030 00:09:28.507 Controller Busy Time: 0 minutes 00:09:28.507 Power Cycles: 0 00:09:28.507 Power On Hours: 0 hours 00:09:28.507 Unsafe Shutdowns: 0 00:09:28.507 Unrecoverable Media Errors: 0 00:09:28.507 Lifetime Error Log Entries: 0 00:09:28.507 Warning Temperature Time: 0 minutes 00:09:28.507 Critical Temperature Time: 0 minutes 00:09:28.507 00:09:28.507 Number of Queues 00:09:28.507 ================ 00:09:28.507 Number of I/O Submission Queues: 64 00:09:28.507 Number of I/O Completion Queues: 64 00:09:28.507 00:09:28.507 ZNS Specific Controller Data 00:09:28.507 ============================ 00:09:28.507 Zone Append Size Limit: 0 00:09:28.507 00:09:28.507 00:09:28.507 Active Namespaces 00:09:28.507 ================= 00:09:28.507 Namespace ID:1 00:09:28.507 Error Recovery Timeout: Unlimited 00:09:28.507 Command Set Identifier: NVM (00h) 00:09:28.507 Deallocate: Supported 00:09:28.507 Deallocated/Unwritten Error: Supported 00:09:28.507 Deallocated Read Value: All 0x00 00:09:28.507 Deallocate in Write Zeroes: Not Supported 00:09:28.507 Deallocated Guard Field: 0xFFFF 00:09:28.507 Flush: Supported 00:09:28.507 Reservation: Not Supported 00:09:28.508 Namespace Sharing Capabilities: Multiple Controllers 00:09:28.508 Size (in LBAs): 262144 (1GiB) 00:09:28.508 Capacity (in LBAs): 262144 (1GiB) 00:09:28.508 Utilization (in LBAs): 262144 (1GiB) 00:09:28.508 Thin Provisioning: Not Supported 00:09:28.508 Per-NS Atomic Units: No 00:09:28.508 Maximum Single Source Range Length: 128 00:09:28.508 Maximum Copy Length: 128 00:09:28.508 Maximum Source Range Count: 128 00:09:28.508 NGUID/EUI64 Never Reused: No 00:09:28.508 Namespace Write Protected: No 00:09:28.508 Endurance group ID: 1 00:09:28.508 Number of LBA Formats: 8 00:09:28.508 Current LBA Format: LBA Format #04 00:09:28.508 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:28.508 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:28.508 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:28.508 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:28.508 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:28.508 LBA Format #05: Data Size: 4096 Metadata Size: 8
[2024-07-21 11:51:27.360689] nvme_ctrlr.c:3486:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 80730 terminated unexpected
00:09:28.508 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:28.508 LBA Format #07: Data Size: 4096 Metadata Size: 64 
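A namespace's byte capacity follows from "Size (in LBAs)" and the data size of the current LBA format. For the endurance-group namespace above, 262144 LBAs at format #04 (4096-byte data, no metadata) is exactly 1 GiB, matching the printed "(1GiB)". The arithmetic in C:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* From the dump: Size (in LBAs): 262144, Current LBA Format #04
     * (4096-byte data, 0-byte metadata). */
    uint64_t nsze = 262144;
    uint64_t lba_bytes = 4096;
    uint64_t total = nsze * lba_bytes;
    printf("%" PRIu64 " bytes = %" PRIu64 " GiB\n", total, total >> 30);
    /* 1073741824 bytes = 1 GiB, as reported */
    return 0;
}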
00:09:28.508 00:09:28.508 Get Feature FDP: 00:09:28.508 ================ 00:09:28.508 Enabled: Yes 00:09:28.508 FDP configuration index: 0 00:09:28.508 00:09:28.508 FDP configurations log page 00:09:28.508 =========================== 00:09:28.508 Number of FDP configurations: 1 00:09:28.508 Version: 0 00:09:28.508 Size: 112 00:09:28.508 FDP Configuration Descriptor: 0 00:09:28.508 Descriptor Size: 96 00:09:28.508 Reclaim Group Identifier format: 2 00:09:28.508 FDP Volatile Write Cache: Not Present 00:09:28.508 FDP Configuration: Valid 00:09:28.508 Vendor Specific Size: 0 00:09:28.508 Number of Reclaim Groups: 2 00:09:28.508 Number of Reclaim Unit Handles: 8 00:09:28.508 Max Placement Identifiers: 128 00:09:28.508 Number of Namespaces Supported: 256 00:09:28.508 Reclaim Unit Nominal Size: 6000000 bytes 00:09:28.508 Estimated Reclaim Unit Time Limit: Not Reported 00:09:28.508 RUH Desc #000: RUH Type: Initially Isolated 00:09:28.508 RUH Desc #001: RUH Type: Initially Isolated 00:09:28.508 RUH Desc #002: RUH Type: Initially Isolated 00:09:28.508 RUH Desc #003: RUH Type: Initially Isolated 00:09:28.508 RUH Desc #004: RUH Type: Initially Isolated 00:09:28.508 RUH Desc #005: RUH Type: Initially Isolated 00:09:28.508 RUH Desc #006: RUH Type: Initially Isolated 00:09:28.508 RUH Desc #007: RUH Type: Initially Isolated 00:09:28.508 00:09:28.508 FDP reclaim unit handle usage log page 00:09:28.508 ====================================== 00:09:28.508 Number of Reclaim Unit Handles: 8 00:09:28.508 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:28.508 RUH Usage Desc #001: RUH Attributes: Unused 00:09:28.508 RUH Usage Desc #002: RUH Attributes: Unused 00:09:28.508 RUH Usage Desc #003: RUH Attributes: Unused 00:09:28.508 RUH Usage Desc #004: RUH Attributes: Unused 00:09:28.508 RUH Usage Desc #005: RUH Attributes: Unused 00:09:28.508 RUH Usage Desc #006: RUH Attributes: Unused 00:09:28.508 RUH Usage Desc #007: RUH Attributes: Unused 00:09:28.508 00:09:28.508 FDP statistics log page 00:09:28.508 ======================= 00:09:28.508 Host bytes with metadata written: 480747520 00:09:28.508 Media bytes with metadata written: 480800768 00:09:28.508 Media bytes erased: 0 00:09:28.508 00:09:28.508 FDP events log page 00:09:28.508 =================== 00:09:28.508 Number of FDP events: 0 00:09:28.508 00:09:28.508 ===================================================== 00:09:28.508 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:28.508 ===================================================== 00:09:28.508 Controller Capabilities/Features 00:09:28.508 ================================ 00:09:28.508 Vendor ID: 1b36 00:09:28.508 Subsystem Vendor ID: 1af4 00:09:28.508 Serial Number: 12342 00:09:28.508 Model Number: QEMU NVMe Ctrl 00:09:28.508 Firmware Version: 8.0.0 00:09:28.508 Recommended Arb Burst: 6 00:09:28.508 IEEE OUI Identifier: 00 54 52 00:09:28.508 Multi-path I/O 00:09:28.508 May have multiple subsystem ports: No 00:09:28.508 May have multiple controllers: No 00:09:28.508 Associated with SR-IOV VF: No 00:09:28.508 Max Data Transfer Size: 524288 00:09:28.508 Max Number of Namespaces: 256 00:09:28.508 Max Number of I/O Queues: 64 00:09:28.508 NVMe Specification Version (VS): 1.4 00:09:28.508 NVMe Specification Version (Identify): 1.4 00:09:28.508 Maximum Queue Entries: 2048 00:09:28.508 Contiguous Queues Required: Yes 00:09:28.508 Arbitration Mechanisms Supported 00:09:28.508 Weighted Round Robin: Not Supported 00:09:28.508 Vendor Specific: Not Supported 00:09:28.508 Reset Timeout: 7500 ms 00:09:28.508 
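The FDP statistics log page above reports host versus media bytes written, from which a write-amplification factor can be derived (a common derived metric; the tool itself does not print it):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* From the FDP statistics log page above. */
    uint64_t host_bytes  = 480747520ULL;  /* Host bytes with metadata written */
    uint64_t media_bytes = 480800768ULL;  /* Media bytes with metadata written */

    /* Write amplification = media writes / host writes; a value close
     * to 1.0 means placement is costing almost no extra media writes. */
    printf("WAF = %.5f\n", (double)media_bytes / (double)host_bytes);
    /* prints WAF = 1.00011 for these counters */
    return 0;
}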
Doorbell Stride: 4 bytes 00:09:28.508 NVM Subsystem Reset: Not Supported 00:09:28.508 Command Sets Supported 00:09:28.508 NVM Command Set: Supported 00:09:28.508 Boot Partition: Not Supported 00:09:28.508 Memory Page Size Minimum: 4096 bytes 00:09:28.508 Memory Page Size Maximum: 65536 bytes 00:09:28.508 Persistent Memory Region: Not Supported 00:09:28.508 Optional Asynchronous Events Supported 00:09:28.508 Namespace Attribute Notices: Supported 00:09:28.508 Firmware Activation Notices: Not Supported 00:09:28.508 ANA Change Notices: Not Supported 00:09:28.508 PLE Aggregate Log Change Notices: Not Supported 00:09:28.508 LBA Status Info Alert Notices: Not Supported 00:09:28.508 EGE Aggregate Log Change Notices: Not Supported 00:09:28.508 Normal NVM Subsystem Shutdown event: Not Supported 00:09:28.508 Zone Descriptor Change Notices: Not Supported 00:09:28.508 Discovery Log Change Notices: Not Supported 00:09:28.508 Controller Attributes 00:09:28.508 128-bit Host Identifier: Not Supported 00:09:28.508 Non-Operational Permissive Mode: Not Supported 00:09:28.508 NVM Sets: Not Supported 00:09:28.508 Read Recovery Levels: Not Supported 00:09:28.508 Endurance Groups: Not Supported 00:09:28.508 Predictable Latency Mode: Not Supported 00:09:28.508 Traffic Based Keep ALive: Not Supported 00:09:28.508 Namespace Granularity: Not Supported 00:09:28.508 SQ Associations: Not Supported 00:09:28.508 UUID List: Not Supported 00:09:28.508 Multi-Domain Subsystem: Not Supported 00:09:28.508 Fixed Capacity Management: Not Supported 00:09:28.508 Variable Capacity Management: Not Supported 00:09:28.508 Delete Endurance Group: Not Supported 00:09:28.508 Delete NVM Set: Not Supported 00:09:28.508 Extended LBA Formats Supported: Supported 00:09:28.508 Flexible Data Placement Supported: Not Supported 00:09:28.508 00:09:28.508 Controller Memory Buffer Support 00:09:28.508 ================================ 00:09:28.508 Supported: No 00:09:28.508 00:09:28.508 Persistent Memory Region Support 00:09:28.508 ================================ 00:09:28.508 Supported: No 00:09:28.508 00:09:28.508 Admin Command Set Attributes 00:09:28.508 ============================ 00:09:28.508 Security Send/Receive: Not Supported 00:09:28.508 Format NVM: Supported 00:09:28.508 Firmware Activate/Download: Not Supported 00:09:28.508 Namespace Management: Supported 00:09:28.508 Device Self-Test: Not Supported 00:09:28.508 Directives: Supported 00:09:28.508 NVMe-MI: Not Supported 00:09:28.508 Virtualization Management: Not Supported 00:09:28.508 Doorbell Buffer Config: Supported 00:09:28.508 Get LBA Status Capability: Not Supported 00:09:28.508 Command & Feature Lockdown Capability: Not Supported 00:09:28.508 Abort Command Limit: 4 00:09:28.508 Async Event Request Limit: 4 00:09:28.508 Number of Firmware Slots: N/A 00:09:28.508 Firmware Slot 1 Read-Only: N/A 00:09:28.508 Firmware Activation Without Reset: N/A 00:09:28.508 Multiple Update Detection Support: N/A 00:09:28.508 Firmware Update Granularity: No Information Provided 00:09:28.508 Per-Namespace SMART Log: Yes 00:09:28.508 Asymmetric Namespace Access Log Page: Not Supported 00:09:28.508 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:28.508 Command Effects Log Page: Supported 00:09:28.508 Get Log Page Extended Data: Supported 00:09:28.508 Telemetry Log Pages: Not Supported 00:09:28.508 Persistent Event Log Pages: Not Supported 00:09:28.508 Supported Log Pages Log Page: May Support 00:09:28.508 Commands Supported & Effects Log Page: Not Supported 00:09:28.508 Feature Identifiers & Effects Log 
Page:May Support 00:09:28.508 NVMe-MI Commands & Effects Log Page: May Support 00:09:28.508 Data Area 4 for Telemetry Log: Not Supported 00:09:28.508 Error Log Page Entries Supported: 1 00:09:28.508 Keep Alive: Not Supported 00:09:28.508 00:09:28.508 NVM Command Set Attributes 00:09:28.508 ========================== 00:09:28.508 Submission Queue Entry Size 00:09:28.508 Max: 64 00:09:28.508 Min: 64 00:09:28.508 Completion Queue Entry Size 00:09:28.508 Max: 16 00:09:28.508 Min: 16 00:09:28.508 Number of Namespaces: 256 00:09:28.508 Compare Command: Supported 00:09:28.508 Write Uncorrectable Command: Not Supported 00:09:28.508 Dataset Management Command: Supported 00:09:28.508 Write Zeroes Command: Supported 00:09:28.508 Set Features Save Field: Supported 00:09:28.508 Reservations: Not Supported 00:09:28.508 Timestamp: Supported 00:09:28.508 Copy: Supported 00:09:28.508 Volatile Write Cache: Present 00:09:28.508 Atomic Write Unit (Normal): 1 00:09:28.508 Atomic Write Unit (PFail): 1 00:09:28.508 Atomic Compare & Write Unit: 1 00:09:28.508 Fused Compare & Write: Not Supported 00:09:28.508 Scatter-Gather List 00:09:28.508 SGL Command Set: Supported 00:09:28.508 SGL Keyed: Not Supported 00:09:28.508 SGL Bit Bucket Descriptor: Not Supported 00:09:28.508 SGL Metadata Pointer: Not Supported 00:09:28.508 Oversized SGL: Not Supported 00:09:28.508 SGL Metadata Address: Not Supported 00:09:28.508 SGL Offset: Not Supported 00:09:28.508 Transport SGL Data Block: Not Supported 00:09:28.508 Replay Protected Memory Block: Not Supported 00:09:28.508 00:09:28.508 Firmware Slot Information 00:09:28.508 ========================= 00:09:28.508 Active slot: 1 00:09:28.508 Slot 1 Firmware Revision: 1.0 00:09:28.508 00:09:28.508 00:09:28.508 Commands Supported and Effects 00:09:28.508 ============================== 00:09:28.508 Admin Commands 00:09:28.508 -------------- 00:09:28.508 Delete I/O Submission Queue (00h): Supported 00:09:28.508 Create I/O Submission Queue (01h): Supported 00:09:28.508 Get Log Page (02h): Supported 00:09:28.508 Delete I/O Completion Queue (04h): Supported 00:09:28.508 Create I/O Completion Queue (05h): Supported 00:09:28.508 Identify (06h): Supported 00:09:28.508 Abort (08h): Supported 00:09:28.508 Set Features (09h): Supported 00:09:28.508 Get Features (0Ah): Supported 00:09:28.508 Asynchronous Event Request (0Ch): Supported 00:09:28.508 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:28.508 Directive Send (19h): Supported 00:09:28.508 Directive Receive (1Ah): Supported 00:09:28.508 Virtualization Management (1Ch): Supported 00:09:28.508 Doorbell Buffer Config (7Ch): Supported 00:09:28.508 Format NVM (80h): Supported LBA-Change 00:09:28.508 I/O Commands 00:09:28.508 ------------ 00:09:28.508 Flush (00h): Supported LBA-Change 00:09:28.508 Write (01h): Supported LBA-Change 00:09:28.508 Read (02h): Supported 00:09:28.508 Compare (05h): Supported 00:09:28.508 Write Zeroes (08h): Supported LBA-Change 00:09:28.508 Dataset Management (09h): Supported LBA-Change 00:09:28.508 Unknown (0Ch): Supported 00:09:28.508 Unknown (12h): Supported 00:09:28.508 Copy (19h): Supported LBA-Change 00:09:28.508 Unknown (1Dh): Supported LBA-Change 00:09:28.508 00:09:28.508 Error Log 00:09:28.508 ========= 00:09:28.508 00:09:28.508 Arbitration 00:09:28.508 =========== 00:09:28.508 Arbitration Burst: no limit 00:09:28.508 00:09:28.508 Power Management 00:09:28.508 ================ 00:09:28.508 Number of Power States: 1 00:09:28.508 Current Power State: Power State #0 00:09:28.508 Power State #0: 
00:09:28.508 Max Power: 25.00 W 00:09:28.508 Non-Operational State: Operational 00:09:28.508 Entry Latency: 16 microseconds 00:09:28.508 Exit Latency: 4 microseconds 00:09:28.508 Relative Read Throughput: 0 00:09:28.508 Relative Read Latency: 0 00:09:28.508 Relative Write Throughput: 0 00:09:28.508 Relative Write Latency: 0 00:09:28.508 Idle Power: Not Reported 00:09:28.508 Active Power: Not Reported 00:09:28.508 Non-Operational Permissive Mode: Not Supported 00:09:28.508 00:09:28.508 Health Information 00:09:28.508 ================== 00:09:28.508 Critical Warnings: 00:09:28.508 Available Spare Space: OK 00:09:28.508 Temperature: OK 00:09:28.508 Device Reliability: OK 00:09:28.508 Read Only: No 00:09:28.508 Volatile Memory Backup: OK 00:09:28.508 Current Temperature: 323 Kelvin (50 Celsius) 00:09:28.508 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:28.508 Available Spare: 0% 00:09:28.508 Available Spare Threshold: 0% 00:09:28.508 Life Percentage Used: 0% 00:09:28.508 Data Units Read: 2351 00:09:28.508 Data Units Written: 2032 00:09:28.508 Host Read Commands: 107155 00:09:28.508 Host Write Commands: 102925 00:09:28.508 Controller Busy Time: 0 minutes 00:09:28.508 Power Cycles: 0 00:09:28.508 Power On Hours: 0 hours 00:09:28.508 Unsafe Shutdowns: 0 00:09:28.508 Unrecoverable Media Errors: 0 00:09:28.508 Lifetime Error Log Entries: 0 00:09:28.508 Warning Temperature Time: 0 minutes 00:09:28.508 Critical Temperature Time: 0 minutes 00:09:28.508 00:09:28.508 Number of Queues 00:09:28.508 ================ 00:09:28.508 Number of I/O Submission Queues: 64 00:09:28.508 Number of I/O Completion Queues: 64 00:09:28.508 00:09:28.508 ZNS Specific Controller Data 00:09:28.508 ============================ 00:09:28.508 Zone Append Size Limit: 0 00:09:28.508 00:09:28.508 00:09:28.508 Active Namespaces 00:09:28.508 ================= 00:09:28.508 Namespace ID:1 00:09:28.508 Error Recovery Timeout: Unlimited 00:09:28.508 Command Set Identifier: NVM (00h) 00:09:28.508 Deallocate: Supported 00:09:28.508 Deallocated/Unwritten Error: Supported 00:09:28.508 Deallocated Read Value: All 0x00 00:09:28.508 Deallocate in Write Zeroes: Not Supported 00:09:28.508 Deallocated Guard Field: 0xFFFF 00:09:28.508 Flush: Supported 00:09:28.508 Reservation: Not Supported 00:09:28.508 Namespace Sharing Capabilities: Private 00:09:28.508 Size (in LBAs): 1048576 (4GiB) 00:09:28.508 Capacity (in LBAs): 1048576 (4GiB) 00:09:28.508 Utilization (in LBAs): 1048576 (4GiB) 00:09:28.508 Thin Provisioning: Not Supported 00:09:28.508 Per-NS Atomic Units: No 00:09:28.768 Maximum Single Source Range Length: 128 00:09:28.768 Maximum Copy Length: 128 00:09:28.768 Maximum Source Range Count: 128 00:09:28.768 NGUID/EUI64 Never Reused: No 00:09:28.768 Namespace Write Protected: No 00:09:28.768 Number of LBA Formats: 8 00:09:28.768 Current LBA Format: LBA Format #04 00:09:28.768 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:28.768 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:28.768 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:28.768 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:28.768 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:28.768 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:28.768 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:28.768 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:28.768 00:09:28.768 Namespace ID:2 00:09:28.768 Error Recovery Timeout: Unlimited 00:09:28.768 Command Set Identifier: NVM (00h) 00:09:28.768 Deallocate: Supported 00:09:28.768 
Deallocated/Unwritten Error: Supported 00:09:28.768 Deallocated Read Value: All 0x00 00:09:28.768 Deallocate in Write Zeroes: Not Supported 00:09:28.768 Deallocated Guard Field: 0xFFFF 00:09:28.768 Flush: Supported 00:09:28.768 Reservation: Not Supported 00:09:28.768 Namespace Sharing Capabilities: Private 00:09:28.768 Size (in LBAs): 1048576 (4GiB) 00:09:28.768 Capacity (in LBAs): 1048576 (4GiB) 00:09:28.768 Utilization (in LBAs): 1048576 (4GiB) 00:09:28.768 Thin Provisioning: Not Supported 00:09:28.768 Per-NS Atomic Units: No 00:09:28.768 Maximum Single Source Range Length: 128 00:09:28.768 Maximum Copy Length: 128 00:09:28.768 Maximum Source Range Count: 128 00:09:28.768 NGUID/EUI64 Never Reused: No 00:09:28.768 Namespace Write Protected: No 00:09:28.768 Number of LBA Formats: 8 00:09:28.768 Current LBA Format: LBA Format #04 00:09:28.768 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:28.768 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:28.768 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:28.768 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:28.768 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:28.768 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:28.768 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:28.768 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:28.768 00:09:28.768 Namespace ID:3 00:09:28.768 Error Recovery Timeout: Unlimited 00:09:28.768 Command Set Identifier: NVM (00h) 00:09:28.768 Deallocate: Supported 00:09:28.768 Deallocated/Unwritten Error: Supported 00:09:28.768 Deallocated Read Value: All 0x00 00:09:28.768 Deallocate in Write Zeroes: Not Supported 00:09:28.768 Deallocated Guard Field: 0xFFFF 00:09:28.768 Flush: Supported 00:09:28.768 Reservation: Not Supported 00:09:28.768 Namespace Sharing Capabilities: Private 00:09:28.768 Size (in LBAs): 1048576 (4GiB) 00:09:28.768 Capacity (in LBAs): 1048576 (4GiB) 00:09:28.768 Utilization (in LBAs): 1048576 (4GiB) 00:09:28.768 Thin Provisioning: Not Supported 00:09:28.768 Per-NS Atomic Units: No 00:09:28.768 Maximum Single Source Range Length: 128 00:09:28.768 Maximum Copy Length: 128 00:09:28.768 Maximum Source Range Count: 128 00:09:28.768 NGUID/EUI64 Never Reused: No 00:09:28.768 Namespace Write Protected: No 00:09:28.768 Number of LBA Formats: 8 00:09:28.769 Current LBA Format: LBA Format #04 00:09:28.769 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:28.769 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:28.769 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:28.769 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:28.769 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:28.769 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:28.769 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:28.769 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:28.769 00:09:28.769 11:51:27 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:28.769 11:51:27 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:09:28.769 ===================================================== 00:09:28.769 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:28.769 ===================================================== 00:09:28.769 Controller Capabilities/Features 00:09:28.769 ================================ 00:09:28.769 Vendor ID: 1b36 00:09:28.769 Subsystem Vendor ID: 1af4 00:09:28.769 Serial Number: 12340 00:09:28.769 Model Number: QEMU NVMe Ctrl 
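The identify dumps in this log come from running spdk_nvme_identify once per PCIe bdf, as the -r 'trtype:PCIe traddr:...' invocations show. A stripped-down C sketch of the same flow against the SPDK NVMe driver follows; treat exact option handling and signatures as per the spdk/nvme.h and spdk/env.h headers in the repo rather than this sketch:

/* minimal_identify.c: connect to one PCIe controller and print the
 * same serial/model/firmware fields the dumps above report. */
#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void) {
    struct spdk_env_opts opts;
    struct spdk_nvme_transport_id trid;
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&opts);
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }

    /* Same transport-ID syntax the test passes with -r. */
    memset(&trid, 0, sizeof(trid));
    if (spdk_nvme_transport_id_parse(&trid, "trtype:PCIe traddr:0000:00:10.0") != 0) {
        return 1;
    }

    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    /* sn/mn/fr are fixed-width, space-padded fields, not C strings. */
    printf("Serial Number: %.20s\n", cdata->sn);
    printf("Model Number: %.40s\n", cdata->mn);
    printf("Firmware Version: %.8s\n", cdata->fr);

    spdk_nvme_detach(ctrlr);
    return 0;
}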
00:09:28.769 Firmware Version: 8.0.0 00:09:28.769 Recommended Arb Burst: 6 00:09:28.769 IEEE OUI Identifier: 00 54 52 00:09:28.769 Multi-path I/O 00:09:28.769 May have multiple subsystem ports: No 00:09:28.769 May have multiple controllers: No 00:09:28.769 Associated with SR-IOV VF: No 00:09:28.769 Max Data Transfer Size: 524288 00:09:28.769 Max Number of Namespaces: 256 00:09:28.769 Max Number of I/O Queues: 64 00:09:28.769 NVMe Specification Version (VS): 1.4 00:09:28.769 NVMe Specification Version (Identify): 1.4 00:09:28.769 Maximum Queue Entries: 2048 00:09:28.769 Contiguous Queues Required: Yes 00:09:28.769 Arbitration Mechanisms Supported 00:09:28.769 Weighted Round Robin: Not Supported 00:09:28.769 Vendor Specific: Not Supported 00:09:28.769 Reset Timeout: 7500 ms 00:09:28.769 Doorbell Stride: 4 bytes 00:09:28.769 NVM Subsystem Reset: Not Supported 00:09:28.769 Command Sets Supported 00:09:28.769 NVM Command Set: Supported 00:09:28.769 Boot Partition: Not Supported 00:09:28.769 Memory Page Size Minimum: 4096 bytes 00:09:28.769 Memory Page Size Maximum: 65536 bytes 00:09:28.769 Persistent Memory Region: Not Supported 00:09:28.769 Optional Asynchronous Events Supported 00:09:28.769 Namespace Attribute Notices: Supported 00:09:28.769 Firmware Activation Notices: Not Supported 00:09:28.769 ANA Change Notices: Not Supported 00:09:28.769 PLE Aggregate Log Change Notices: Not Supported 00:09:28.769 LBA Status Info Alert Notices: Not Supported 00:09:28.769 EGE Aggregate Log Change Notices: Not Supported 00:09:28.769 Normal NVM Subsystem Shutdown event: Not Supported 00:09:28.769 Zone Descriptor Change Notices: Not Supported 00:09:28.769 Discovery Log Change Notices: Not Supported 00:09:28.769 Controller Attributes 00:09:28.769 128-bit Host Identifier: Not Supported 00:09:28.769 Non-Operational Permissive Mode: Not Supported 00:09:28.769 NVM Sets: Not Supported 00:09:28.769 Read Recovery Levels: Not Supported 00:09:28.769 Endurance Groups: Not Supported 00:09:28.769 Predictable Latency Mode: Not Supported 00:09:28.769 Traffic Based Keep ALive: Not Supported 00:09:28.769 Namespace Granularity: Not Supported 00:09:28.769 SQ Associations: Not Supported 00:09:28.769 UUID List: Not Supported 00:09:28.769 Multi-Domain Subsystem: Not Supported 00:09:28.769 Fixed Capacity Management: Not Supported 00:09:28.769 Variable Capacity Management: Not Supported 00:09:28.769 Delete Endurance Group: Not Supported 00:09:28.769 Delete NVM Set: Not Supported 00:09:28.769 Extended LBA Formats Supported: Supported 00:09:28.769 Flexible Data Placement Supported: Not Supported 00:09:28.769 00:09:28.769 Controller Memory Buffer Support 00:09:28.769 ================================ 00:09:28.769 Supported: No 00:09:28.769 00:09:28.769 Persistent Memory Region Support 00:09:28.769 ================================ 00:09:28.769 Supported: No 00:09:28.769 00:09:28.769 Admin Command Set Attributes 00:09:28.769 ============================ 00:09:28.769 Security Send/Receive: Not Supported 00:09:28.769 Format NVM: Supported 00:09:28.769 Firmware Activate/Download: Not Supported 00:09:28.769 Namespace Management: Supported 00:09:28.769 Device Self-Test: Not Supported 00:09:28.769 Directives: Supported 00:09:28.769 NVMe-MI: Not Supported 00:09:28.769 Virtualization Management: Not Supported 00:09:28.769 Doorbell Buffer Config: Supported 00:09:28.769 Get LBA Status Capability: Not Supported 00:09:28.769 Command & Feature Lockdown Capability: Not Supported 00:09:28.769 Abort Command Limit: 4 00:09:28.769 Async Event Request 
Limit: 4 00:09:28.769 Number of Firmware Slots: N/A 00:09:28.769 Firmware Slot 1 Read-Only: N/A 00:09:28.769 Firmware Activation Without Reset: N/A 00:09:28.769 Multiple Update Detection Support: N/A 00:09:28.769 Firmware Update Granularity: No Information Provided 00:09:28.769 Per-Namespace SMART Log: Yes 00:09:28.769 Asymmetric Namespace Access Log Page: Not Supported 00:09:28.769 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:28.769 Command Effects Log Page: Supported 00:09:28.769 Get Log Page Extended Data: Supported 00:09:28.769 Telemetry Log Pages: Not Supported 00:09:28.769 Persistent Event Log Pages: Not Supported 00:09:28.769 Supported Log Pages Log Page: May Support 00:09:28.769 Commands Supported & Effects Log Page: Not Supported 00:09:28.769 Feature Identifiers & Effects Log Page:May Support 00:09:28.769 NVMe-MI Commands & Effects Log Page: May Support 00:09:28.769 Data Area 4 for Telemetry Log: Not Supported 00:09:28.769 Error Log Page Entries Supported: 1 00:09:28.769 Keep Alive: Not Supported 00:09:28.769 00:09:28.769 NVM Command Set Attributes 00:09:28.769 ========================== 00:09:28.769 Submission Queue Entry Size 00:09:28.769 Max: 64 00:09:28.769 Min: 64 00:09:28.769 Completion Queue Entry Size 00:09:28.769 Max: 16 00:09:28.769 Min: 16 00:09:28.769 Number of Namespaces: 256 00:09:28.769 Compare Command: Supported 00:09:28.769 Write Uncorrectable Command: Not Supported 00:09:28.769 Dataset Management Command: Supported 00:09:28.769 Write Zeroes Command: Supported 00:09:28.769 Set Features Save Field: Supported 00:09:28.769 Reservations: Not Supported 00:09:28.769 Timestamp: Supported 00:09:28.769 Copy: Supported 00:09:28.769 Volatile Write Cache: Present 00:09:28.769 Atomic Write Unit (Normal): 1 00:09:28.769 Atomic Write Unit (PFail): 1 00:09:28.769 Atomic Compare & Write Unit: 1 00:09:28.769 Fused Compare & Write: Not Supported 00:09:28.769 Scatter-Gather List 00:09:28.769 SGL Command Set: Supported 00:09:28.769 SGL Keyed: Not Supported 00:09:28.769 SGL Bit Bucket Descriptor: Not Supported 00:09:28.769 SGL Metadata Pointer: Not Supported 00:09:28.769 Oversized SGL: Not Supported 00:09:28.769 SGL Metadata Address: Not Supported 00:09:28.769 SGL Offset: Not Supported 00:09:28.769 Transport SGL Data Block: Not Supported 00:09:28.769 Replay Protected Memory Block: Not Supported 00:09:28.769 00:09:28.769 Firmware Slot Information 00:09:28.769 ========================= 00:09:28.769 Active slot: 1 00:09:28.769 Slot 1 Firmware Revision: 1.0 00:09:28.769 00:09:28.769 00:09:28.769 Commands Supported and Effects 00:09:28.769 ============================== 00:09:28.769 Admin Commands 00:09:28.769 -------------- 00:09:28.769 Delete I/O Submission Queue (00h): Supported 00:09:28.769 Create I/O Submission Queue (01h): Supported 00:09:28.769 Get Log Page (02h): Supported 00:09:28.769 Delete I/O Completion Queue (04h): Supported 00:09:28.769 Create I/O Completion Queue (05h): Supported 00:09:28.769 Identify (06h): Supported 00:09:28.769 Abort (08h): Supported 00:09:28.769 Set Features (09h): Supported 00:09:28.769 Get Features (0Ah): Supported 00:09:28.769 Asynchronous Event Request (0Ch): Supported 00:09:28.769 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:28.769 Directive Send (19h): Supported 00:09:28.769 Directive Receive (1Ah): Supported 00:09:28.769 Virtualization Management (1Ch): Supported 00:09:28.769 Doorbell Buffer Config (7Ch): Supported 00:09:28.769 Format NVM (80h): Supported LBA-Change 00:09:28.769 I/O Commands 00:09:28.769 ------------ 
00:09:28.769 Flush (00h): Supported LBA-Change 00:09:28.769 Write (01h): Supported LBA-Change 00:09:28.769 Read (02h): Supported 00:09:28.769 Compare (05h): Supported 00:09:28.769 Write Zeroes (08h): Supported LBA-Change 00:09:28.769 Dataset Management (09h): Supported LBA-Change 00:09:28.769 Unknown (0Ch): Supported 00:09:28.769 Unknown (12h): Supported 00:09:28.769 Copy (19h): Supported LBA-Change 00:09:28.769 Unknown (1Dh): Supported LBA-Change 00:09:28.769 00:09:28.769 Error Log 00:09:28.769 ========= 00:09:28.769 00:09:28.769 Arbitration 00:09:28.769 =========== 00:09:28.769 Arbitration Burst: no limit 00:09:28.769 00:09:28.769 Power Management 00:09:28.769 ================ 00:09:28.769 Number of Power States: 1 00:09:28.769 Current Power State: Power State #0 00:09:28.769 Power State #0: 00:09:28.769 Max Power: 25.00 W 00:09:28.769 Non-Operational State: Operational 00:09:28.769 Entry Latency: 16 microseconds 00:09:28.769 Exit Latency: 4 microseconds 00:09:28.769 Relative Read Throughput: 0 00:09:28.769 Relative Read Latency: 0 00:09:28.769 Relative Write Throughput: 0 00:09:28.769 Relative Write Latency: 0 00:09:29.029 Idle Power: Not Reported 00:09:29.029 Active Power: Not Reported 00:09:29.029 Non-Operational Permissive Mode: Not Supported 00:09:29.029 00:09:29.029 Health Information 00:09:29.029 ================== 00:09:29.029 Critical Warnings: 00:09:29.029 Available Spare Space: OK 00:09:29.029 Temperature: OK 00:09:29.029 Device Reliability: OK 00:09:29.029 Read Only: No 00:09:29.029 Volatile Memory Backup: OK 00:09:29.029 Current Temperature: 323 Kelvin (50 Celsius) 00:09:29.029 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:29.029 Available Spare: 0% 00:09:29.029 Available Spare Threshold: 0% 00:09:29.029 Life Percentage Used: 0% 00:09:29.029 Data Units Read: 1079 00:09:29.029 Data Units Written: 907 00:09:29.029 Host Read Commands: 50350 00:09:29.029 Host Write Commands: 48793 00:09:29.029 Controller Busy Time: 0 minutes 00:09:29.029 Power Cycles: 0 00:09:29.029 Power On Hours: 0 hours 00:09:29.029 Unsafe Shutdowns: 0 00:09:29.029 Unrecoverable Media Errors: 0 00:09:29.029 Lifetime Error Log Entries: 0 00:09:29.029 Warning Temperature Time: 0 minutes 00:09:29.029 Critical Temperature Time: 0 minutes 00:09:29.029 00:09:29.029 Number of Queues 00:09:29.029 ================ 00:09:29.029 Number of I/O Submission Queues: 64 00:09:29.029 Number of I/O Completion Queues: 64 00:09:29.029 00:09:29.029 ZNS Specific Controller Data 00:09:29.029 ============================ 00:09:29.029 Zone Append Size Limit: 0 00:09:29.029 00:09:29.029 00:09:29.029 Active Namespaces 00:09:29.029 ================= 00:09:29.029 Namespace ID:1 00:09:29.029 Error Recovery Timeout: Unlimited 00:09:29.029 Command Set Identifier: NVM (00h) 00:09:29.029 Deallocate: Supported 00:09:29.029 Deallocated/Unwritten Error: Supported 00:09:29.029 Deallocated Read Value: All 0x00 00:09:29.029 Deallocate in Write Zeroes: Not Supported 00:09:29.029 Deallocated Guard Field: 0xFFFF 00:09:29.029 Flush: Supported 00:09:29.029 Reservation: Not Supported 00:09:29.029 Metadata Transferred as: Separate Metadata Buffer 00:09:29.029 Namespace Sharing Capabilities: Private 00:09:29.029 Size (in LBAs): 1548666 (5GiB) 00:09:29.029 Capacity (in LBAs): 1548666 (5GiB) 00:09:29.029 Utilization (in LBAs): 1548666 (5GiB) 00:09:29.029 Thin Provisioning: Not Supported 00:09:29.029 Per-NS Atomic Units: No 00:09:29.029 Maximum Single Source Range Length: 128 00:09:29.029 Maximum Copy Length: 128 00:09:29.029 Maximum Source Range Count: 
128 00:09:29.029 NGUID/EUI64 Never Reused: No 00:09:29.029 Namespace Write Protected: No 00:09:29.029 Number of LBA Formats: 8 00:09:29.029 Current LBA Format: LBA Format #07 00:09:29.029 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:29.029 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:29.029 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:29.029 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:29.029 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:29.029 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:29.029 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:29.029 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:29.029 00:09:29.029 11:51:27 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:29.029 11:51:27 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:09:29.029 ===================================================== 00:09:29.029 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:29.029 ===================================================== 00:09:29.029 Controller Capabilities/Features 00:09:29.029 ================================ 00:09:29.029 Vendor ID: 1b36 00:09:29.029 Subsystem Vendor ID: 1af4 00:09:29.029 Serial Number: 12341 00:09:29.029 Model Number: QEMU NVMe Ctrl 00:09:29.029 Firmware Version: 8.0.0 00:09:29.029 Recommended Arb Burst: 6 00:09:29.030 IEEE OUI Identifier: 00 54 52 00:09:29.030 Multi-path I/O 00:09:29.030 May have multiple subsystem ports: No 00:09:29.030 May have multiple controllers: No 00:09:29.030 Associated with SR-IOV VF: No 00:09:29.030 Max Data Transfer Size: 524288 00:09:29.030 Max Number of Namespaces: 256 00:09:29.030 Max Number of I/O Queues: 64 00:09:29.030 NVMe Specification Version (VS): 1.4 00:09:29.030 NVMe Specification Version (Identify): 1.4 00:09:29.030 Maximum Queue Entries: 2048 00:09:29.030 Contiguous Queues Required: Yes 00:09:29.030 Arbitration Mechanisms Supported 00:09:29.030 Weighted Round Robin: Not Supported 00:09:29.030 Vendor Specific: Not Supported 00:09:29.030 Reset Timeout: 7500 ms 00:09:29.030 Doorbell Stride: 4 bytes 00:09:29.030 NVM Subsystem Reset: Not Supported 00:09:29.030 Command Sets Supported 00:09:29.030 NVM Command Set: Supported 00:09:29.030 Boot Partition: Not Supported 00:09:29.030 Memory Page Size Minimum: 4096 bytes 00:09:29.030 Memory Page Size Maximum: 65536 bytes 00:09:29.030 Persistent Memory Region: Not Supported 00:09:29.030 Optional Asynchronous Events Supported 00:09:29.030 Namespace Attribute Notices: Supported 00:09:29.030 Firmware Activation Notices: Not Supported 00:09:29.030 ANA Change Notices: Not Supported 00:09:29.030 PLE Aggregate Log Change Notices: Not Supported 00:09:29.030 LBA Status Info Alert Notices: Not Supported 00:09:29.030 EGE Aggregate Log Change Notices: Not Supported 00:09:29.030 Normal NVM Subsystem Shutdown event: Not Supported 00:09:29.030 Zone Descriptor Change Notices: Not Supported 00:09:29.030 Discovery Log Change Notices: Not Supported 00:09:29.030 Controller Attributes 00:09:29.030 128-bit Host Identifier: Not Supported 00:09:29.030 Non-Operational Permissive Mode: Not Supported 00:09:29.030 NVM Sets: Not Supported 00:09:29.030 Read Recovery Levels: Not Supported 00:09:29.030 Endurance Groups: Not Supported 00:09:29.030 Predictable Latency Mode: Not Supported 00:09:29.030 Traffic Based Keep ALive: Not Supported 00:09:29.030 Namespace Granularity: Not Supported 00:09:29.030 SQ Associations: Not Supported 
00:09:29.030 UUID List: Not Supported 00:09:29.030 Multi-Domain Subsystem: Not Supported 00:09:29.030 Fixed Capacity Management: Not Supported 00:09:29.030 Variable Capacity Management: Not Supported 00:09:29.030 Delete Endurance Group: Not Supported 00:09:29.030 Delete NVM Set: Not Supported 00:09:29.030 Extended LBA Formats Supported: Supported 00:09:29.030 Flexible Data Placement Supported: Not Supported 00:09:29.030 00:09:29.030 Controller Memory Buffer Support 00:09:29.030 ================================ 00:09:29.030 Supported: No 00:09:29.030 00:09:29.030 Persistent Memory Region Support 00:09:29.030 ================================ 00:09:29.030 Supported: No 00:09:29.030 00:09:29.030 Admin Command Set Attributes 00:09:29.030 ============================ 00:09:29.030 Security Send/Receive: Not Supported 00:09:29.030 Format NVM: Supported 00:09:29.030 Firmware Activate/Download: Not Supported 00:09:29.030 Namespace Management: Supported 00:09:29.030 Device Self-Test: Not Supported 00:09:29.030 Directives: Supported 00:09:29.030 NVMe-MI: Not Supported 00:09:29.030 Virtualization Management: Not Supported 00:09:29.030 Doorbell Buffer Config: Supported 00:09:29.030 Get LBA Status Capability: Not Supported 00:09:29.030 Command & Feature Lockdown Capability: Not Supported 00:09:29.030 Abort Command Limit: 4 00:09:29.030 Async Event Request Limit: 4 00:09:29.030 Number of Firmware Slots: N/A 00:09:29.030 Firmware Slot 1 Read-Only: N/A 00:09:29.030 Firmware Activation Without Reset: N/A 00:09:29.030 Multiple Update Detection Support: N/A 00:09:29.030 Firmware Update Granularity: No Information Provided 00:09:29.030 Per-Namespace SMART Log: Yes 00:09:29.030 Asymmetric Namespace Access Log Page: Not Supported 00:09:29.030 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:29.030 Command Effects Log Page: Supported 00:09:29.030 Get Log Page Extended Data: Supported 00:09:29.030 Telemetry Log Pages: Not Supported 00:09:29.030 Persistent Event Log Pages: Not Supported 00:09:29.030 Supported Log Pages Log Page: May Support 00:09:29.030 Commands Supported & Effects Log Page: Not Supported 00:09:29.030 Feature Identifiers & Effects Log Page:May Support 00:09:29.030 NVMe-MI Commands & Effects Log Page: May Support 00:09:29.030 Data Area 4 for Telemetry Log: Not Supported 00:09:29.030 Error Log Page Entries Supported: 1 00:09:29.030 Keep Alive: Not Supported 00:09:29.030 00:09:29.030 NVM Command Set Attributes 00:09:29.030 ========================== 00:09:29.030 Submission Queue Entry Size 00:09:29.030 Max: 64 00:09:29.030 Min: 64 00:09:29.030 Completion Queue Entry Size 00:09:29.030 Max: 16 00:09:29.030 Min: 16 00:09:29.030 Number of Namespaces: 256 00:09:29.030 Compare Command: Supported 00:09:29.030 Write Uncorrectable Command: Not Supported 00:09:29.030 Dataset Management Command: Supported 00:09:29.030 Write Zeroes Command: Supported 00:09:29.030 Set Features Save Field: Supported 00:09:29.030 Reservations: Not Supported 00:09:29.030 Timestamp: Supported 00:09:29.030 Copy: Supported 00:09:29.030 Volatile Write Cache: Present 00:09:29.030 Atomic Write Unit (Normal): 1 00:09:29.030 Atomic Write Unit (PFail): 1 00:09:29.030 Atomic Compare & Write Unit: 1 00:09:29.030 Fused Compare & Write: Not Supported 00:09:29.030 Scatter-Gather List 00:09:29.030 SGL Command Set: Supported 00:09:29.030 SGL Keyed: Not Supported 00:09:29.030 SGL Bit Bucket Descriptor: Not Supported 00:09:29.030 SGL Metadata Pointer: Not Supported 00:09:29.030 Oversized SGL: Not Supported 00:09:29.030 SGL Metadata Address: Not 
Supported 00:09:29.030 SGL Offset: Not Supported 00:09:29.030 Transport SGL Data Block: Not Supported 00:09:29.030 Replay Protected Memory Block: Not Supported 00:09:29.030 00:09:29.030 Firmware Slot Information 00:09:29.030 ========================= 00:09:29.030 Active slot: 1 00:09:29.030 Slot 1 Firmware Revision: 1.0 00:09:29.030 00:09:29.030 00:09:29.030 Commands Supported and Effects 00:09:29.030 ============================== 00:09:29.030 Admin Commands 00:09:29.030 -------------- 00:09:29.030 Delete I/O Submission Queue (00h): Supported 00:09:29.030 Create I/O Submission Queue (01h): Supported 00:09:29.030 Get Log Page (02h): Supported 00:09:29.030 Delete I/O Completion Queue (04h): Supported 00:09:29.030 Create I/O Completion Queue (05h): Supported 00:09:29.030 Identify (06h): Supported 00:09:29.030 Abort (08h): Supported 00:09:29.030 Set Features (09h): Supported 00:09:29.030 Get Features (0Ah): Supported 00:09:29.030 Asynchronous Event Request (0Ch): Supported 00:09:29.030 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:29.030 Directive Send (19h): Supported 00:09:29.030 Directive Receive (1Ah): Supported 00:09:29.030 Virtualization Management (1Ch): Supported 00:09:29.030 Doorbell Buffer Config (7Ch): Supported 00:09:29.030 Format NVM (80h): Supported LBA-Change 00:09:29.030 I/O Commands 00:09:29.030 ------------ 00:09:29.030 Flush (00h): Supported LBA-Change 00:09:29.030 Write (01h): Supported LBA-Change 00:09:29.030 Read (02h): Supported 00:09:29.030 Compare (05h): Supported 00:09:29.030 Write Zeroes (08h): Supported LBA-Change 00:09:29.030 Dataset Management (09h): Supported LBA-Change 00:09:29.030 Unknown (0Ch): Supported 00:09:29.030 Unknown (12h): Supported 00:09:29.030 Copy (19h): Supported LBA-Change 00:09:29.030 Unknown (1Dh): Supported LBA-Change 00:09:29.030 00:09:29.030 Error Log 00:09:29.030 ========= 00:09:29.030 00:09:29.030 Arbitration 00:09:29.030 =========== 00:09:29.030 Arbitration Burst: no limit 00:09:29.030 00:09:29.030 Power Management 00:09:29.030 ================ 00:09:29.030 Number of Power States: 1 00:09:29.030 Current Power State: Power State #0 00:09:29.030 Power State #0: 00:09:29.030 Max Power: 25.00 W 00:09:29.030 Non-Operational State: Operational 00:09:29.030 Entry Latency: 16 microseconds 00:09:29.030 Exit Latency: 4 microseconds 00:09:29.030 Relative Read Throughput: 0 00:09:29.030 Relative Read Latency: 0 00:09:29.030 Relative Write Throughput: 0 00:09:29.030 Relative Write Latency: 0 00:09:29.289 Idle Power: Not Reported 00:09:29.289 Active Power: Not Reported 00:09:29.289 Non-Operational Permissive Mode: Not Supported 00:09:29.289 00:09:29.289 Health Information 00:09:29.289 ================== 00:09:29.289 Critical Warnings: 00:09:29.289 Available Spare Space: OK 00:09:29.289 Temperature: OK 00:09:29.289 Device Reliability: OK 00:09:29.289 Read Only: No 00:09:29.289 Volatile Memory Backup: OK 00:09:29.289 Current Temperature: 323 Kelvin (50 Celsius) 00:09:29.289 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:29.289 Available Spare: 0% 00:09:29.289 Available Spare Threshold: 0% 00:09:29.289 Life Percentage Used: 0% 00:09:29.289 Data Units Read: 807 00:09:29.289 Data Units Written: 657 00:09:29.289 Host Read Commands: 36335 00:09:29.289 Host Write Commands: 34106 00:09:29.289 Controller Busy Time: 0 minutes 00:09:29.289 Power Cycles: 0 00:09:29.289 Power On Hours: 0 hours 00:09:29.289 Unsafe Shutdowns: 0 00:09:29.289 Unrecoverable Media Errors: 0 00:09:29.289 Lifetime Error Log Entries: 0 00:09:29.289 Warning 
Temperature Time: 0 minutes 00:09:29.289 Critical Temperature Time: 0 minutes 00:09:29.289 00:09:29.289 Number of Queues 00:09:29.289 ================ 00:09:29.289 Number of I/O Submission Queues: 64 00:09:29.289 Number of I/O Completion Queues: 64 00:09:29.289 00:09:29.289 ZNS Specific Controller Data 00:09:29.289 ============================ 00:09:29.289 Zone Append Size Limit: 0 00:09:29.289 00:09:29.289 00:09:29.289 Active Namespaces 00:09:29.289 ================= 00:09:29.289 Namespace ID:1 00:09:29.289 Error Recovery Timeout: Unlimited 00:09:29.289 Command Set Identifier: NVM (00h) 00:09:29.289 Deallocate: Supported 00:09:29.289 Deallocated/Unwritten Error: Supported 00:09:29.289 Deallocated Read Value: All 0x00 00:09:29.289 Deallocate in Write Zeroes: Not Supported 00:09:29.289 Deallocated Guard Field: 0xFFFF 00:09:29.289 Flush: Supported 00:09:29.289 Reservation: Not Supported 00:09:29.289 Namespace Sharing Capabilities: Private 00:09:29.289 Size (in LBAs): 1310720 (5GiB) 00:09:29.289 Capacity (in LBAs): 1310720 (5GiB) 00:09:29.289 Utilization (in LBAs): 1310720 (5GiB) 00:09:29.289 Thin Provisioning: Not Supported 00:09:29.289 Per-NS Atomic Units: No 00:09:29.289 Maximum Single Source Range Length: 128 00:09:29.289 Maximum Copy Length: 128 00:09:29.289 Maximum Source Range Count: 128 00:09:29.289 NGUID/EUI64 Never Reused: No 00:09:29.289 Namespace Write Protected: No 00:09:29.289 Number of LBA Formats: 8 00:09:29.289 Current LBA Format: LBA Format #04 00:09:29.289 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:29.289 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:29.289 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:29.289 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:29.289 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:29.289 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:29.289 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:29.289 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:29.289 00:09:29.289 11:51:27 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:29.289 11:51:27 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:09:29.289 ===================================================== 00:09:29.289 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:29.289 ===================================================== 00:09:29.289 Controller Capabilities/Features 00:09:29.289 ================================ 00:09:29.289 Vendor ID: 1b36 00:09:29.289 Subsystem Vendor ID: 1af4 00:09:29.289 Serial Number: 12342 00:09:29.289 Model Number: QEMU NVMe Ctrl 00:09:29.289 Firmware Version: 8.0.0 00:09:29.289 Recommended Arb Burst: 6 00:09:29.289 IEEE OUI Identifier: 00 54 52 00:09:29.289 Multi-path I/O 00:09:29.289 May have multiple subsystem ports: No 00:09:29.289 May have multiple controllers: No 00:09:29.289 Associated with SR-IOV VF: No 00:09:29.289 Max Data Transfer Size: 524288 00:09:29.289 Max Number of Namespaces: 256 00:09:29.289 Max Number of I/O Queues: 64 00:09:29.289 NVMe Specification Version (VS): 1.4 00:09:29.289 NVMe Specification Version (Identify): 1.4 00:09:29.289 Maximum Queue Entries: 2048 00:09:29.289 Contiguous Queues Required: Yes 00:09:29.289 Arbitration Mechanisms Supported 00:09:29.289 Weighted Round Robin: Not Supported 00:09:29.289 Vendor Specific: Not Supported 00:09:29.289 Reset Timeout: 7500 ms 00:09:29.289 Doorbell Stride: 4 bytes 00:09:29.289 NVM Subsystem Reset: Not Supported 
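
Note on the health pages above: each temperature is reported twice, in kelvins and in degrees Celsius, and the two figures are consistent, the Celsius value being the kelvin value less the 273 K offset. A one-line sanity check in the same bash the harness runs, with the values copied from the dump (the echo itself is illustrative, not part of the test):

$ echo "current $((323 - 273)) C, threshold $((343 - 273)) C"
current 50 C, threshold 70 C
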
00:09:29.289 Command Sets Supported 00:09:29.289 NVM Command Set: Supported 00:09:29.290 Boot Partition: Not Supported 00:09:29.290 Memory Page Size Minimum: 4096 bytes 00:09:29.290 Memory Page Size Maximum: 65536 bytes 00:09:29.290 Persistent Memory Region: Not Supported 00:09:29.290 Optional Asynchronous Events Supported 00:09:29.290 Namespace Attribute Notices: Supported 00:09:29.290 Firmware Activation Notices: Not Supported 00:09:29.290 ANA Change Notices: Not Supported 00:09:29.290 PLE Aggregate Log Change Notices: Not Supported 00:09:29.290 LBA Status Info Alert Notices: Not Supported 00:09:29.290 EGE Aggregate Log Change Notices: Not Supported 00:09:29.290 Normal NVM Subsystem Shutdown event: Not Supported 00:09:29.290 Zone Descriptor Change Notices: Not Supported 00:09:29.290 Discovery Log Change Notices: Not Supported 00:09:29.290 Controller Attributes 00:09:29.290 128-bit Host Identifier: Not Supported 00:09:29.290 Non-Operational Permissive Mode: Not Supported 00:09:29.290 NVM Sets: Not Supported 00:09:29.290 Read Recovery Levels: Not Supported 00:09:29.290 Endurance Groups: Not Supported 00:09:29.290 Predictable Latency Mode: Not Supported 00:09:29.290 Traffic Based Keep Alive: Not Supported 00:09:29.290 Namespace Granularity: Not Supported 00:09:29.290 SQ Associations: Not Supported 00:09:29.290 UUID List: Not Supported 00:09:29.290 Multi-Domain Subsystem: Not Supported 00:09:29.290 Fixed Capacity Management: Not Supported 00:09:29.290 Variable Capacity Management: Not Supported 00:09:29.290 Delete Endurance Group: Not Supported 00:09:29.290 Delete NVM Set: Not Supported 00:09:29.290 Extended LBA Formats Supported: Supported 00:09:29.290 Flexible Data Placement Supported: Not Supported 00:09:29.290 00:09:29.290 Controller Memory Buffer Support 00:09:29.290 ================================ 00:09:29.290 Supported: No 00:09:29.290 00:09:29.290 Persistent Memory Region Support 00:09:29.290 ================================ 00:09:29.290 Supported: No 00:09:29.290 00:09:29.290 Admin Command Set Attributes 00:09:29.290 ============================ 00:09:29.290 Security Send/Receive: Not Supported 00:09:29.290 Format NVM: Supported 00:09:29.290 Firmware Activate/Download: Not Supported 00:09:29.290 Namespace Management: Supported 00:09:29.290 Device Self-Test: Not Supported 00:09:29.290 Directives: Supported 00:09:29.290 NVMe-MI: Not Supported 00:09:29.290 Virtualization Management: Not Supported 00:09:29.290 Doorbell Buffer Config: Supported 00:09:29.290 Get LBA Status Capability: Not Supported 00:09:29.290 Command & Feature Lockdown Capability: Not Supported 00:09:29.290 Abort Command Limit: 4 00:09:29.290 Async Event Request Limit: 4 00:09:29.290 Number of Firmware Slots: N/A 00:09:29.290 Firmware Slot 1 Read-Only: N/A 00:09:29.290 Firmware Activation Without Reset: N/A 00:09:29.290 Multiple Update Detection Support: N/A 00:09:29.290 Firmware Update Granularity: No Information Provided 00:09:29.290 Per-Namespace SMART Log: Yes 00:09:29.290 Asymmetric Namespace Access Log Page: Not Supported 00:09:29.290 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:29.290 Command Effects Log Page: Supported 00:09:29.290 Get Log Page Extended Data: Supported 00:09:29.290 Telemetry Log Pages: Not Supported 00:09:29.290 Persistent Event Log Pages: Not Supported 00:09:29.290 Supported Log Pages Log Page: May Support 00:09:29.290 Commands Supported & Effects Log Page: Not Supported 00:09:29.290 Feature Identifiers & Effects Log Page: May Support 00:09:29.290 NVMe-MI Commands & Effects Log Page: May
Support 00:09:29.290 Data Area 4 for Telemetry Log: Not Supported 00:09:29.290 Error Log Page Entries Supported: 1 00:09:29.290 Keep Alive: Not Supported 00:09:29.290 00:09:29.290 NVM Command Set Attributes 00:09:29.290 ========================== 00:09:29.290 Submission Queue Entry Size 00:09:29.290 Max: 64 00:09:29.290 Min: 64 00:09:29.290 Completion Queue Entry Size 00:09:29.290 Max: 16 00:09:29.290 Min: 16 00:09:29.290 Number of Namespaces: 256 00:09:29.290 Compare Command: Supported 00:09:29.290 Write Uncorrectable Command: Not Supported 00:09:29.290 Dataset Management Command: Supported 00:09:29.290 Write Zeroes Command: Supported 00:09:29.290 Set Features Save Field: Supported 00:09:29.290 Reservations: Not Supported 00:09:29.290 Timestamp: Supported 00:09:29.290 Copy: Supported 00:09:29.290 Volatile Write Cache: Present 00:09:29.290 Atomic Write Unit (Normal): 1 00:09:29.290 Atomic Write Unit (PFail): 1 00:09:29.290 Atomic Compare & Write Unit: 1 00:09:29.290 Fused Compare & Write: Not Supported 00:09:29.290 Scatter-Gather List 00:09:29.290 SGL Command Set: Supported 00:09:29.290 SGL Keyed: Not Supported 00:09:29.290 SGL Bit Bucket Descriptor: Not Supported 00:09:29.290 SGL Metadata Pointer: Not Supported 00:09:29.290 Oversized SGL: Not Supported 00:09:29.290 SGL Metadata Address: Not Supported 00:09:29.290 SGL Offset: Not Supported 00:09:29.290 Transport SGL Data Block: Not Supported 00:09:29.290 Replay Protected Memory Block: Not Supported 00:09:29.290 00:09:29.290 Firmware Slot Information 00:09:29.290 ========================= 00:09:29.290 Active slot: 1 00:09:29.290 Slot 1 Firmware Revision: 1.0 00:09:29.290 00:09:29.290 00:09:29.290 Commands Supported and Effects 00:09:29.290 ============================== 00:09:29.290 Admin Commands 00:09:29.290 -------------- 00:09:29.290 Delete I/O Submission Queue (00h): Supported 00:09:29.290 Create I/O Submission Queue (01h): Supported 00:09:29.290 Get Log Page (02h): Supported 00:09:29.290 Delete I/O Completion Queue (04h): Supported 00:09:29.290 Create I/O Completion Queue (05h): Supported 00:09:29.290 Identify (06h): Supported 00:09:29.290 Abort (08h): Supported 00:09:29.290 Set Features (09h): Supported 00:09:29.290 Get Features (0Ah): Supported 00:09:29.290 Asynchronous Event Request (0Ch): Supported 00:09:29.290 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:29.290 Directive Send (19h): Supported 00:09:29.290 Directive Receive (1Ah): Supported 00:09:29.290 Virtualization Management (1Ch): Supported 00:09:29.290 Doorbell Buffer Config (7Ch): Supported 00:09:29.290 Format NVM (80h): Supported LBA-Change 00:09:29.290 I/O Commands 00:09:29.290 ------------ 00:09:29.290 Flush (00h): Supported LBA-Change 00:09:29.290 Write (01h): Supported LBA-Change 00:09:29.290 Read (02h): Supported 00:09:29.290 Compare (05h): Supported 00:09:29.290 Write Zeroes (08h): Supported LBA-Change 00:09:29.290 Dataset Management (09h): Supported LBA-Change 00:09:29.290 Unknown (0Ch): Supported 00:09:29.290 Unknown (12h): Supported 00:09:29.290 Copy (19h): Supported LBA-Change 00:09:29.290 Unknown (1Dh): Supported LBA-Change 00:09:29.290 00:09:29.290 Error Log 00:09:29.290 ========= 00:09:29.290 00:09:29.290 Arbitration 00:09:29.290 =========== 00:09:29.290 Arbitration Burst: no limit 00:09:29.290 00:09:29.290 Power Management 00:09:29.290 ================ 00:09:29.290 Number of Power States: 1 00:09:29.290 Current Power State: Power State #0 00:09:29.290 Power State #0: 00:09:29.290 Max Power: 25.00 W 00:09:29.290 Non-Operational State: 
Operational 00:09:29.290 Entry Latency: 16 microseconds 00:09:29.290 Exit Latency: 4 microseconds 00:09:29.290 Relative Read Throughput: 0 00:09:29.290 Relative Read Latency: 0 00:09:29.290 Relative Write Throughput: 0 00:09:29.290 Relative Write Latency: 0 00:09:29.290 Idle Power: Not Reported 00:09:29.290 Active Power: Not Reported 00:09:29.290 Non-Operational Permissive Mode: Not Supported 00:09:29.290 00:09:29.290 Health Information 00:09:29.290 ================== 00:09:29.290 Critical Warnings: 00:09:29.290 Available Spare Space: OK 00:09:29.290 Temperature: OK 00:09:29.290 Device Reliability: OK 00:09:29.290 Read Only: No 00:09:29.290 Volatile Memory Backup: OK 00:09:29.290 Current Temperature: 323 Kelvin (50 Celsius) 00:09:29.290 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:29.290 Available Spare: 0% 00:09:29.290 Available Spare Threshold: 0% 00:09:29.290 Life Percentage Used: 0% 00:09:29.290 Data Units Read: 2351 00:09:29.290 Data Units Written: 2032 00:09:29.290 Host Read Commands: 107155 00:09:29.290 Host Write Commands: 102925 00:09:29.290 Controller Busy Time: 0 minutes 00:09:29.290 Power Cycles: 0 00:09:29.290 Power On Hours: 0 hours 00:09:29.290 Unsafe Shutdowns: 0 00:09:29.290 Unrecoverable Media Errors: 0 00:09:29.290 Lifetime Error Log Entries: 0 00:09:29.290 Warning Temperature Time: 0 minutes 00:09:29.290 Critical Temperature Time: 0 minutes 00:09:29.290 00:09:29.290 Number of Queues 00:09:29.290 ================ 00:09:29.290 Number of I/O Submission Queues: 64 00:09:29.290 Number of I/O Completion Queues: 64 00:09:29.290 00:09:29.290 ZNS Specific Controller Data 00:09:29.290 ============================ 00:09:29.290 Zone Append Size Limit: 0 00:09:29.290 00:09:29.290 00:09:29.290 Active Namespaces 00:09:29.290 ================= 00:09:29.290 Namespace ID:1 00:09:29.290 Error Recovery Timeout: Unlimited 00:09:29.290 Command Set Identifier: NVM (00h) 00:09:29.290 Deallocate: Supported 00:09:29.290 Deallocated/Unwritten Error: Supported 00:09:29.290 Deallocated Read Value: All 0x00 00:09:29.290 Deallocate in Write Zeroes: Not Supported 00:09:29.290 Deallocated Guard Field: 0xFFFF 00:09:29.290 Flush: Supported 00:09:29.290 Reservation: Not Supported 00:09:29.290 Namespace Sharing Capabilities: Private 00:09:29.290 Size (in LBAs): 1048576 (4GiB) 00:09:29.290 Capacity (in LBAs): 1048576 (4GiB) 00:09:29.290 Utilization (in LBAs): 1048576 (4GiB) 00:09:29.290 Thin Provisioning: Not Supported 00:09:29.290 Per-NS Atomic Units: No 00:09:29.290 Maximum Single Source Range Length: 128 00:09:29.290 Maximum Copy Length: 128 00:09:29.290 Maximum Source Range Count: 128 00:09:29.290 NGUID/EUI64 Never Reused: No 00:09:29.290 Namespace Write Protected: No 00:09:29.290 Number of LBA Formats: 8 00:09:29.290 Current LBA Format: LBA Format #04 00:09:29.290 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:29.290 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:29.290 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:29.290 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:29.290 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:29.290 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:29.290 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:29.290 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:29.290 00:09:29.290 Namespace ID:2 00:09:29.290 Error Recovery Timeout: Unlimited 00:09:29.290 Command Set Identifier: NVM (00h) 00:09:29.290 Deallocate: Supported 00:09:29.290 Deallocated/Unwritten Error: Supported 00:09:29.290 Deallocated Read Value: 
All 0x00 00:09:29.290 Deallocate in Write Zeroes: Not Supported 00:09:29.290 Deallocated Guard Field: 0xFFFF 00:09:29.290 Flush: Supported 00:09:29.290 Reservation: Not Supported 00:09:29.290 Namespace Sharing Capabilities: Private 00:09:29.290 Size (in LBAs): 1048576 (4GiB) 00:09:29.290 Capacity (in LBAs): 1048576 (4GiB) 00:09:29.290 Utilization (in LBAs): 1048576 (4GiB) 00:09:29.290 Thin Provisioning: Not Supported 00:09:29.290 Per-NS Atomic Units: No 00:09:29.290 Maximum Single Source Range Length: 128 00:09:29.290 Maximum Copy Length: 128 00:09:29.290 Maximum Source Range Count: 128 00:09:29.290 NGUID/EUI64 Never Reused: No 00:09:29.290 Namespace Write Protected: No 00:09:29.290 Number of LBA Formats: 8 00:09:29.290 Current LBA Format: LBA Format #04 00:09:29.290 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:29.290 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:29.290 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:29.290 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:29.290 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:29.549 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:29.549 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:29.549 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:29.549 00:09:29.549 Namespace ID:3 00:09:29.549 Error Recovery Timeout: Unlimited 00:09:29.549 Command Set Identifier: NVM (00h) 00:09:29.549 Deallocate: Supported 00:09:29.549 Deallocated/Unwritten Error: Supported 00:09:29.549 Deallocated Read Value: All 0x00 00:09:29.549 Deallocate in Write Zeroes: Not Supported 00:09:29.549 Deallocated Guard Field: 0xFFFF 00:09:29.549 Flush: Supported 00:09:29.549 Reservation: Not Supported 00:09:29.549 Namespace Sharing Capabilities: Private 00:09:29.549 Size (in LBAs): 1048576 (4GiB) 00:09:29.549 Capacity (in LBAs): 1048576 (4GiB) 00:09:29.549 Utilization (in LBAs): 1048576 (4GiB) 00:09:29.549 Thin Provisioning: Not Supported 00:09:29.549 Per-NS Atomic Units: No 00:09:29.549 Maximum Single Source Range Length: 128 00:09:29.549 Maximum Copy Length: 128 00:09:29.549 Maximum Source Range Count: 128 00:09:29.549 NGUID/EUI64 Never Reused: No 00:09:29.549 Namespace Write Protected: No 00:09:29.549 Number of LBA Formats: 8 00:09:29.549 Current LBA Format: LBA Format #04 00:09:29.549 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:29.549 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:29.549 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:29.549 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:29.549 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:29.549 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:29.549 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:29.549 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:29.549 00:09:29.550 11:51:28 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:29.550 11:51:28 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:09:29.550 ===================================================== 00:09:29.550 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:29.550 ===================================================== 00:09:29.550 Controller Capabilities/Features 00:09:29.550 ================================ 00:09:29.550 Vendor ID: 1b36 00:09:29.550 Subsystem Vendor ID: 1af4 00:09:29.550 Serial Number: 12343 00:09:29.550 Model Number: QEMU NVMe Ctrl 00:09:29.550 Firmware Version: 8.0.0 00:09:29.550 Recommended Arb Burst: 6 
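
Note on the namespace listings above: the size figures are self-consistent with the LBA counts. Under the current LBA format (#04, 4096-byte data size), 1048576 blocks is exactly 4 GiB and 1310720 blocks is exactly 5 GiB. A quick bash check with the values copied from the dumps (illustrative only):

$ for lbas in 1310720 1048576; do echo "$lbas LBAs x 4096 B = $((lbas * 4096 / 1073741824)) GiB"; done
1310720 LBAs x 4096 B = 5 GiB
1048576 LBAs x 4096 B = 4 GiB
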
00:09:29.550 IEEE OUI Identifier: 00 54 52 00:09:29.550 Multi-path I/O 00:09:29.550 May have multiple subsystem ports: No 00:09:29.550 May have multiple controllers: Yes 00:09:29.550 Associated with SR-IOV VF: No 00:09:29.550 Max Data Transfer Size: 524288 00:09:29.550 Max Number of Namespaces: 256 00:09:29.550 Max Number of I/O Queues: 64 00:09:29.550 NVMe Specification Version (VS): 1.4 00:09:29.550 NVMe Specification Version (Identify): 1.4 00:09:29.550 Maximum Queue Entries: 2048 00:09:29.550 Contiguous Queues Required: Yes 00:09:29.550 Arbitration Mechanisms Supported 00:09:29.550 Weighted Round Robin: Not Supported 00:09:29.550 Vendor Specific: Not Supported 00:09:29.550 Reset Timeout: 7500 ms 00:09:29.550 Doorbell Stride: 4 bytes 00:09:29.550 NVM Subsystem Reset: Not Supported 00:09:29.550 Command Sets Supported 00:09:29.550 NVM Command Set: Supported 00:09:29.550 Boot Partition: Not Supported 00:09:29.550 Memory Page Size Minimum: 4096 bytes 00:09:29.550 Memory Page Size Maximum: 65536 bytes 00:09:29.550 Persistent Memory Region: Not Supported 00:09:29.550 Optional Asynchronous Events Supported 00:09:29.550 Namespace Attribute Notices: Supported 00:09:29.550 Firmware Activation Notices: Not Supported 00:09:29.550 ANA Change Notices: Not Supported 00:09:29.550 PLE Aggregate Log Change Notices: Not Supported 00:09:29.550 LBA Status Info Alert Notices: Not Supported 00:09:29.550 EGE Aggregate Log Change Notices: Not Supported 00:09:29.550 Normal NVM Subsystem Shutdown event: Not Supported 00:09:29.550 Zone Descriptor Change Notices: Not Supported 00:09:29.550 Discovery Log Change Notices: Not Supported 00:09:29.550 Controller Attributes 00:09:29.550 128-bit Host Identifier: Not Supported 00:09:29.550 Non-Operational Permissive Mode: Not Supported 00:09:29.550 NVM Sets: Not Supported 00:09:29.550 Read Recovery Levels: Not Supported 00:09:29.550 Endurance Groups: Supported 00:09:29.550 Predictable Latency Mode: Not Supported 00:09:29.550 Traffic Based Keep Alive: Not Supported 00:09:29.550 Namespace Granularity: Not Supported 00:09:29.550 SQ Associations: Not Supported 00:09:29.550 UUID List: Not Supported 00:09:29.550 Multi-Domain Subsystem: Not Supported 00:09:29.550 Fixed Capacity Management: Not Supported 00:09:29.550 Variable Capacity Management: Not Supported 00:09:29.550 Delete Endurance Group: Not Supported 00:09:29.550 Delete NVM Set: Not Supported 00:09:29.550 Extended LBA Formats Supported: Supported 00:09:29.550 Flexible Data Placement Supported: Supported 00:09:29.550 00:09:29.550 Controller Memory Buffer Support 00:09:29.550 ================================ 00:09:29.550 Supported: No 00:09:29.550 00:09:29.550 Persistent Memory Region Support 00:09:29.550 ================================ 00:09:29.550 Supported: No 00:09:29.550 00:09:29.550 Admin Command Set Attributes 00:09:29.550 ============================ 00:09:29.550 Security Send/Receive: Not Supported 00:09:29.550 Format NVM: Supported 00:09:29.550 Firmware Activate/Download: Not Supported 00:09:29.550 Namespace Management: Supported 00:09:29.550 Device Self-Test: Not Supported 00:09:29.550 Directives: Supported 00:09:29.550 NVMe-MI: Not Supported 00:09:29.550 Virtualization Management: Not Supported 00:09:29.550 Doorbell Buffer Config: Supported 00:09:29.550 Get LBA Status Capability: Not Supported 00:09:29.550 Command & Feature Lockdown Capability: Not Supported 00:09:29.550 Abort Command Limit: 4 00:09:29.550 Async Event Request Limit: 4 00:09:29.550 Number of Firmware Slots: N/A 00:09:29.550 Firmware Slot 1
Read-Only: N/A 00:09:29.550 Firmware Activation Without Reset: N/A 00:09:29.550 Multiple Update Detection Support: N/A 00:09:29.550 Firmware Update Granularity: No Information Provided 00:09:29.550 Per-Namespace SMART Log: Yes 00:09:29.550 Asymmetric Namespace Access Log Page: Not Supported 00:09:29.550 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:29.550 Command Effects Log Page: Supported 00:09:29.550 Get Log Page Extended Data: Supported 00:09:29.550 Telemetry Log Pages: Not Supported 00:09:29.550 Persistent Event Log Pages: Not Supported 00:09:29.550 Supported Log Pages Log Page: May Support 00:09:29.550 Commands Supported & Effects Log Page: Not Supported 00:09:29.550 Feature Identifiers & Effects Log Page: May Support 00:09:29.550 NVMe-MI Commands & Effects Log Page: May Support 00:09:29.550 Data Area 4 for Telemetry Log: Not Supported 00:09:29.550 Error Log Page Entries Supported: 1 00:09:29.550 Keep Alive: Not Supported 00:09:29.550 00:09:29.550 NVM Command Set Attributes 00:09:29.550 ========================== 00:09:29.550 Submission Queue Entry Size 00:09:29.550 Max: 64 00:09:29.550 Min: 64 00:09:29.550 Completion Queue Entry Size 00:09:29.550 Max: 16 00:09:29.550 Min: 16 00:09:29.550 Number of Namespaces: 256 00:09:29.550 Compare Command: Supported 00:09:29.550 Write Uncorrectable Command: Not Supported 00:09:29.550 Dataset Management Command: Supported 00:09:29.550 Write Zeroes Command: Supported 00:09:29.550 Set Features Save Field: Supported 00:09:29.550 Reservations: Not Supported 00:09:29.550 Timestamp: Supported 00:09:29.550 Copy: Supported 00:09:29.550 Volatile Write Cache: Present 00:09:29.550 Atomic Write Unit (Normal): 1 00:09:29.550 Atomic Write Unit (PFail): 1 00:09:29.550 Atomic Compare & Write Unit: 1 00:09:29.550 Fused Compare & Write: Not Supported 00:09:29.550 Scatter-Gather List 00:09:29.550 SGL Command Set: Supported 00:09:29.550 SGL Keyed: Not Supported 00:09:29.550 SGL Bit Bucket Descriptor: Not Supported 00:09:29.550 SGL Metadata Pointer: Not Supported 00:09:29.550 Oversized SGL: Not Supported 00:09:29.550 SGL Metadata Address: Not Supported 00:09:29.550 SGL Offset: Not Supported 00:09:29.550 Transport SGL Data Block: Not Supported 00:09:29.550 Replay Protected Memory Block: Not Supported 00:09:29.550 00:09:29.550 Firmware Slot Information 00:09:29.550 ========================= 00:09:29.550 Active slot: 1 00:09:29.550 Slot 1 Firmware Revision: 1.0 00:09:29.550 00:09:29.550 00:09:29.550 Commands Supported and Effects 00:09:29.550 ============================== 00:09:29.550 Admin Commands 00:09:29.550 -------------- 00:09:29.550 Delete I/O Submission Queue (00h): Supported 00:09:29.550 Create I/O Submission Queue (01h): Supported 00:09:29.550 Get Log Page (02h): Supported 00:09:29.550 Delete I/O Completion Queue (04h): Supported 00:09:29.550 Create I/O Completion Queue (05h): Supported 00:09:29.550 Identify (06h): Supported 00:09:29.550 Abort (08h): Supported 00:09:29.550 Set Features (09h): Supported 00:09:29.550 Get Features (0Ah): Supported 00:09:29.550 Asynchronous Event Request (0Ch): Supported 00:09:29.550 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:29.550 Directive Send (19h): Supported 00:09:29.550 Directive Receive (1Ah): Supported 00:09:29.550 Virtualization Management (1Ch): Supported 00:09:29.550 Doorbell Buffer Config (7Ch): Supported 00:09:29.550 Format NVM (80h): Supported LBA-Change 00:09:29.550 I/O Commands 00:09:29.550 ------------ 00:09:29.550 Flush (00h): Supported LBA-Change 00:09:29.550 Write (01h): Supported
LBA-Change 00:09:29.550 Read (02h): Supported 00:09:29.550 Compare (05h): Supported 00:09:29.550 Write Zeroes (08h): Supported LBA-Change 00:09:29.550 Dataset Management (09h): Supported LBA-Change 00:09:29.550 Unknown (0Ch): Supported 00:09:29.550 Unknown (12h): Supported 00:09:29.550 Copy (19h): Supported LBA-Change 00:09:29.550 Unknown (1Dh): Supported LBA-Change 00:09:29.550 00:09:29.550 Error Log 00:09:29.550 ========= 00:09:29.550 00:09:29.550 Arbitration 00:09:29.550 =========== 00:09:29.550 Arbitration Burst: no limit 00:09:29.550 00:09:29.550 Power Management 00:09:29.550 ================ 00:09:29.550 Number of Power States: 1 00:09:29.550 Current Power State: Power State #0 00:09:29.550 Power State #0: 00:09:29.550 Max Power: 25.00 W 00:09:29.550 Non-Operational State: Operational 00:09:29.550 Entry Latency: 16 microseconds 00:09:29.550 Exit Latency: 4 microseconds 00:09:29.550 Relative Read Throughput: 0 00:09:29.550 Relative Read Latency: 0 00:09:29.550 Relative Write Throughput: 0 00:09:29.550 Relative Write Latency: 0 00:09:29.550 Idle Power: Not Reported 00:09:29.550 Active Power: Not Reported 00:09:29.550 Non-Operational Permissive Mode: Not Supported 00:09:29.550 00:09:29.550 Health Information 00:09:29.551 ================== 00:09:29.551 Critical Warnings: 00:09:29.551 Available Spare Space: OK 00:09:29.551 Temperature: OK 00:09:29.551 Device Reliability: OK 00:09:29.551 Read Only: No 00:09:29.551 Volatile Memory Backup: OK 00:09:29.551 Current Temperature: 323 Kelvin (50 Celsius) 00:09:29.551 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:29.551 Available Spare: 0% 00:09:29.551 Available Spare Threshold: 0% 00:09:29.551 Life Percentage Used: 0% 00:09:29.551 Data Units Read: 859 00:09:29.551 Data Units Written: 752 00:09:29.551 Host Read Commands: 36440 00:09:29.551 Host Write Commands: 35030 00:09:29.551 Controller Busy Time: 0 minutes 00:09:29.551 Power Cycles: 0 00:09:29.551 Power On Hours: 0 hours 00:09:29.551 Unsafe Shutdowns: 0 00:09:29.551 Unrecoverable Media Errors: 0 00:09:29.551 Lifetime Error Log Entries: 0 00:09:29.551 Warning Temperature Time: 0 minutes 00:09:29.551 Critical Temperature Time: 0 minutes 00:09:29.551 00:09:29.551 Number of Queues 00:09:29.551 ================ 00:09:29.551 Number of I/O Submission Queues: 64 00:09:29.551 Number of I/O Completion Queues: 64 00:09:29.551 00:09:29.551 ZNS Specific Controller Data 00:09:29.551 ============================ 00:09:29.551 Zone Append Size Limit: 0 00:09:29.551 00:09:29.551 00:09:29.551 Active Namespaces 00:09:29.551 ================= 00:09:29.551 Namespace ID:1 00:09:29.551 Error Recovery Timeout: Unlimited 00:09:29.551 Command Set Identifier: NVM (00h) 00:09:29.551 Deallocate: Supported 00:09:29.551 Deallocated/Unwritten Error: Supported 00:09:29.551 Deallocated Read Value: All 0x00 00:09:29.551 Deallocate in Write Zeroes: Not Supported 00:09:29.551 Deallocated Guard Field: 0xFFFF 00:09:29.551 Flush: Supported 00:09:29.551 Reservation: Not Supported 00:09:29.551 Namespace Sharing Capabilities: Multiple Controllers 00:09:29.551 Size (in LBAs): 262144 (1GiB) 00:09:29.551 Capacity (in LBAs): 262144 (1GiB) 00:09:29.551 Utilization (in LBAs): 262144 (1GiB) 00:09:29.551 Thin Provisioning: Not Supported 00:09:29.551 Per-NS Atomic Units: No 00:09:29.551 Maximum Single Source Range Length: 128 00:09:29.551 Maximum Copy Length: 128 00:09:29.551 Maximum Source Range Count: 128 00:09:29.551 NGUID/EUI64 Never Reused: No 00:09:29.551 Namespace Write Protected: No 00:09:29.551 Endurance group ID: 1 00:09:29.551 
Number of LBA Formats: 8 00:09:29.551 Current LBA Format: LBA Format #04 00:09:29.551 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:29.551 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:29.551 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:29.551 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:29.551 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:29.551 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:29.551 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:29.551 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:29.551 00:09:29.551 Get Feature FDP: 00:09:29.551 ================ 00:09:29.551 Enabled: Yes 00:09:29.551 FDP configuration index: 0 00:09:29.551 00:09:29.551 FDP configurations log page 00:09:29.551 =========================== 00:09:29.551 Number of FDP configurations: 1 00:09:29.551 Version: 0 00:09:29.551 Size: 112 00:09:29.551 FDP Configuration Descriptor: 0 00:09:29.551 Descriptor Size: 96 00:09:29.551 Reclaim Group Identifier format: 2 00:09:29.551 FDP Volatile Write Cache: Not Present 00:09:29.551 FDP Configuration: Valid 00:09:29.551 Vendor Specific Size: 0 00:09:29.551 Number of Reclaim Groups: 2 00:09:29.551 Number of Reclaim Unit Handles: 8 00:09:29.551 Max Placement Identifiers: 128 00:09:29.551 Number of Namespaces Supported: 256 00:09:29.551 Reclaim Unit Nominal Size: 6000000 bytes 00:09:29.551 Estimated Reclaim Unit Time Limit: Not Reported 00:09:29.551 RUH Desc #000: RUH Type: Initially Isolated 00:09:29.551 RUH Desc #001: RUH Type: Initially Isolated 00:09:29.551 RUH Desc #002: RUH Type: Initially Isolated 00:09:29.551 RUH Desc #003: RUH Type: Initially Isolated 00:09:29.551 RUH Desc #004: RUH Type: Initially Isolated 00:09:29.551 RUH Desc #005: RUH Type: Initially Isolated 00:09:29.551 RUH Desc #006: RUH Type: Initially Isolated 00:09:29.551 RUH Desc #007: RUH Type: Initially Isolated 00:09:29.551 00:09:29.551 FDP reclaim unit handle usage log page 00:09:29.811 ====================================== 00:09:29.811 Number of Reclaim Unit Handles: 8 00:09:29.811 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:29.811 RUH Usage Desc #001: RUH Attributes: Unused 00:09:29.811 RUH Usage Desc #002: RUH Attributes: Unused 00:09:29.811 RUH Usage Desc #003: RUH Attributes: Unused 00:09:29.811 RUH Usage Desc #004: RUH Attributes: Unused 00:09:29.811 RUH Usage Desc #005: RUH Attributes: Unused 00:09:29.811 RUH Usage Desc #006: RUH Attributes: Unused 00:09:29.811 RUH Usage Desc #007: RUH Attributes: Unused 00:09:29.811 00:09:29.811 FDP statistics log page 00:09:29.811 ======================= 00:09:29.811 Host bytes with metadata written: 480747520 00:09:29.811 Media bytes with metadata written: 480800768 00:09:29.811 Media bytes erased: 0 00:09:29.811 00:09:29.811 FDP events log page 00:09:29.811 =================== 00:09:29.811 Number of FDP events: 0 00:09:29.811 00:09:29.811 00:09:29.811 real 0m1.433s 00:09:29.811 user 0m0.526s 00:09:29.811 sys 0m0.721s 00:09:29.811 11:51:28 nvme.nvme_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:29.811 11:51:28 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:09:29.811 ************************************ 00:09:29.811 END TEST nvme_identify 00:09:29.811 ************************************ 00:09:29.811 11:51:28 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:29.811 11:51:28 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:29.811 11:51:28 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable
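
Note on the dumps above: nvme_identify produces one dump per controller by iterating spdk_nvme_identify over the PCI addresses, exactly as the traced nvme/nvme.sh@15 and @16 lines show. A minimal standalone sketch of that loop (the bdfs array is hard-coded here with the addresses seen in this run; in the test it is discovered rather than listed):

$ bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
$ for bdf in "${bdfs[@]}"; do
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0
  done
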
00:09:29.811 11:51:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:29.811 ************************************ 00:09:29.811 START TEST nvme_perf 00:09:29.811 ************************************ 00:09:29.811 11:51:28 nvme.nvme_perf -- common/autotest_common.sh@1121 -- # nvme_perf 00:09:29.811 11:51:28 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:31.189 Initializing NVMe Controllers 00:09:31.189 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:31.189 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:31.189 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:31.189 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:31.189 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:31.189 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:31.189 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:31.189 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:31.189 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:31.189 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:31.189 Initialization complete. Launching workers. 00:09:31.189 ======================================================== 00:09:31.189 Latency(us) 00:09:31.189 Device Information : IOPS MiB/s Average min max 00:09:31.189 PCIE (0000:00:10.0) NSID 1 from core 0: 15101.31 176.97 8479.28 5940.60 33035.07 00:09:31.189 PCIE (0000:00:11.0) NSID 1 from core 0: 15101.31 176.97 8472.06 5777.25 32493.54 00:09:31.189 PCIE (0000:00:13.0) NSID 1 from core 0: 15101.31 176.97 8462.96 5039.57 32644.56 00:09:31.189 PCIE (0000:00:12.0) NSID 1 from core 0: 15101.31 176.97 8453.68 4802.20 32119.39 00:09:31.189 PCIE (0000:00:12.0) NSID 2 from core 0: 15101.31 176.97 8444.50 4439.70 31644.52 00:09:31.189 PCIE (0000:00:12.0) NSID 3 from core 0: 15101.31 176.97 8434.97 4134.75 31154.67 00:09:31.189 ======================================================== 00:09:31.189 Total : 90607.87 1061.81 8457.91 4134.75 33035.07 00:09:31.189 00:09:31.189 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:31.189 ================================================================================= 00:09:31.189 1.00000% : 7097.348us 00:09:31.189 10.00000% : 7440.769us 00:09:31.189 25.00000% : 7669.715us 00:09:31.189 50.00000% : 8013.135us 00:09:31.189 75.00000% : 8356.555us 00:09:31.189 90.00000% : 9329.579us 00:09:31.189 95.00000% : 12134.176us 00:09:31.189 98.00000% : 15339.431us 00:09:31.189 99.00000% : 18659.158us 00:09:31.189 99.50000% : 23810.459us 00:09:31.189 99.90000% : 32739.382us 00:09:31.189 99.99000% : 32968.328us 00:09:31.189 99.99900% : 33197.275us 00:09:31.189 99.99990% : 33197.275us 00:09:31.189 99.99999% : 33197.275us 00:09:31.189 00:09:31.189 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:31.189 ================================================================================= 00:09:31.189 1.00000% : 7183.203us 00:09:31.189 10.00000% : 7498.005us 00:09:31.189 25.00000% : 7669.715us 00:09:31.189 50.00000% : 7955.899us 00:09:31.189 75.00000% : 8299.319us 00:09:31.189 90.00000% : 9215.106us 00:09:31.189 95.00000% : 11962.466us 00:09:31.189 98.00000% : 15110.484us 00:09:31.189 99.00000% : 18315.738us 00:09:31.189 99.50000% : 24497.300us 00:09:31.189 99.90000% : 32281.488us 00:09:31.189 99.99000% : 32510.435us 00:09:31.189 99.99900% : 32510.435us 00:09:31.189 99.99990% : 32510.435us 00:09:31.189 99.99999% : 32510.435us 00:09:31.189 
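
Note on the device table above: the MiB/s column is simply the IOPS column scaled by the 12288-byte I/O size selected with -o 12288, so at 12 KiB per read the 15101.31 IOPS of the first row works out to the 176.97 MiB/s shown next to it. A quick awk check (illustrative only):

$ awk 'BEGIN { printf "%.2f MiB/s\n", 15101.31 * 12288 / (1024 * 1024) }'
176.97 MiB/s
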
00:09:31.189 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:31.189 ================================================================================= 00:09:31.189 1.00000% : 7183.203us 00:09:31.189 10.00000% : 7440.769us 00:09:31.189 25.00000% : 7669.715us 00:09:31.189 50.00000% : 7955.899us 00:09:31.189 75.00000% : 8299.319us 00:09:31.189 90.00000% : 9157.869us 00:09:31.189 95.00000% : 11847.993us 00:09:31.189 98.00000% : 15453.904us 00:09:31.189 99.00000% : 17972.318us 00:09:31.189 99.50000% : 25184.140us 00:09:31.189 99.90000% : 32510.435us 00:09:31.189 99.99000% : 32739.382us 00:09:31.189 99.99900% : 32739.382us 00:09:31.189 99.99990% : 32739.382us 00:09:31.189 99.99999% : 32739.382us 00:09:31.189 00:09:31.189 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:31.189 ================================================================================= 00:09:31.189 1.00000% : 7154.585us 00:09:31.189 10.00000% : 7440.769us 00:09:31.189 25.00000% : 7669.715us 00:09:31.189 50.00000% : 7955.899us 00:09:31.189 75.00000% : 8299.319us 00:09:31.189 90.00000% : 9100.632us 00:09:31.189 95.00000% : 12076.940us 00:09:31.189 98.00000% : 16026.271us 00:09:31.189 99.00000% : 17399.951us 00:09:31.189 99.50000% : 24840.720us 00:09:31.189 99.90000% : 31823.595us 00:09:31.189 99.99000% : 32281.488us 00:09:31.189 99.99900% : 32281.488us 00:09:31.189 99.99990% : 32281.488us 00:09:31.189 99.99999% : 32281.488us 00:09:31.189 00:09:31.189 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:31.189 ================================================================================= 00:09:31.189 1.00000% : 7125.967us 00:09:31.189 10.00000% : 7440.769us 00:09:31.189 25.00000% : 7669.715us 00:09:31.189 50.00000% : 7955.899us 00:09:31.189 75.00000% : 8299.319us 00:09:31.189 90.00000% : 9043.396us 00:09:31.189 95.00000% : 11847.993us 00:09:31.189 98.00000% : 16140.744us 00:09:31.189 99.00000% : 17399.951us 00:09:31.189 99.50000% : 24382.826us 00:09:31.189 99.90000% : 31365.701us 00:09:31.189 99.99000% : 31823.595us 00:09:31.189 99.99900% : 31823.595us 00:09:31.189 99.99990% : 31823.595us 00:09:31.189 99.99999% : 31823.595us 00:09:31.189 00:09:31.189 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:31.189 ================================================================================= 00:09:31.189 1.00000% : 7125.967us 00:09:31.189 10.00000% : 7498.005us 00:09:31.189 25.00000% : 7669.715us 00:09:31.189 50.00000% : 7955.899us 00:09:31.189 75.00000% : 8299.319us 00:09:31.189 90.00000% : 9157.869us 00:09:31.189 95.00000% : 11676.283us 00:09:31.189 98.00000% : 15682.851us 00:09:31.189 99.00000% : 18086.791us 00:09:31.189 99.50000% : 23924.933us 00:09:31.189 99.90000% : 30907.808us 00:09:31.189 99.99000% : 31136.755us 00:09:31.189 99.99900% : 31365.701us 00:09:31.189 99.99990% : 31365.701us 00:09:31.189 99.99999% : 31365.701us 00:09:31.189 00:09:31.189 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:31.189 ============================================================================== 00:09:31.189 Range in us Cumulative IO count 00:09:31.189 5923.997 - 5952.615: 0.0132% ( 2) 00:09:31.189 5952.615 - 5981.233: 0.0397% ( 4) 00:09:31.189 5981.233 - 6009.852: 0.0530% ( 2) 00:09:31.189 6009.852 - 6038.470: 0.0596% ( 1) 00:09:31.189 6038.470 - 6067.088: 0.0794% ( 3) 00:09:31.189 6067.088 - 6095.707: 0.0861% ( 1) 00:09:31.189 6095.707 - 6124.325: 0.0993% ( 2) 00:09:31.189 6124.325 - 6152.943: 0.1126% ( 2) 00:09:31.189 6152.943 - 
6181.562: 0.1258% ( 2) 00:09:31.189 6181.562 - 6210.180: 0.1390% ( 2) 00:09:31.189 6210.180 - 6238.798: 0.1523% ( 2) 00:09:31.189 6238.798 - 6267.417: 0.1655% ( 2) 00:09:31.189 6267.417 - 6296.035: 0.1788% ( 2) 00:09:31.189 6296.035 - 6324.653: 0.1920% ( 2) 00:09:31.189 6324.653 - 6353.272: 0.2119% ( 3) 00:09:31.190 6353.272 - 6381.890: 0.2185% ( 1) 00:09:31.190 6381.890 - 6410.508: 0.2317% ( 2) 00:09:31.190 6410.508 - 6439.127: 0.2450% ( 2) 00:09:31.190 6439.127 - 6467.745: 0.2648% ( 3) 00:09:31.190 6467.745 - 6496.363: 0.2715% ( 1) 00:09:31.190 6496.363 - 6524.982: 0.2913% ( 3) 00:09:31.190 6524.982 - 6553.600: 0.3046% ( 2) 00:09:31.190 6553.600 - 6582.218: 0.3178% ( 2) 00:09:31.190 6582.218 - 6610.837: 0.3310% ( 2) 00:09:31.190 6610.837 - 6639.455: 0.3443% ( 2) 00:09:31.190 6639.455 - 6668.073: 0.3509% ( 1) 00:09:31.190 6668.073 - 6696.692: 0.3774% ( 4) 00:09:31.190 6696.692 - 6725.310: 0.3840% ( 1) 00:09:31.190 6725.310 - 6753.928: 0.3972% ( 2) 00:09:31.190 6753.928 - 6782.547: 0.4039% ( 1) 00:09:31.190 6782.547 - 6811.165: 0.4237% ( 3) 00:09:31.190 6925.638 - 6954.257: 0.4303% ( 1) 00:09:31.190 6954.257 - 6982.875: 0.4833% ( 8) 00:09:31.190 6982.875 - 7011.493: 0.5760% ( 14) 00:09:31.190 7011.493 - 7040.112: 0.7150% ( 21) 00:09:31.190 7040.112 - 7068.730: 0.9401% ( 34) 00:09:31.190 7068.730 - 7097.348: 1.2248% ( 43) 00:09:31.190 7097.348 - 7125.967: 1.5824% ( 54) 00:09:31.190 7125.967 - 7154.585: 2.0392% ( 69) 00:09:31.190 7154.585 - 7183.203: 2.6549% ( 93) 00:09:31.190 7183.203 - 7211.822: 3.4296% ( 117) 00:09:31.190 7211.822 - 7240.440: 4.2505% ( 124) 00:09:31.190 7240.440 - 7269.059: 5.1443% ( 135) 00:09:31.190 7269.059 - 7297.677: 6.2434% ( 166) 00:09:31.190 7297.677 - 7326.295: 7.3226% ( 163) 00:09:31.190 7326.295 - 7383.532: 9.9510% ( 397) 00:09:31.190 7383.532 - 7440.769: 13.1091% ( 477) 00:09:31.190 7440.769 - 7498.005: 16.7770% ( 554) 00:09:31.190 7498.005 - 7555.242: 20.2595% ( 526) 00:09:31.190 7555.242 - 7612.479: 24.0863% ( 578) 00:09:31.190 7612.479 - 7669.715: 27.9330% ( 581) 00:09:31.190 7669.715 - 7726.952: 31.8591% ( 593) 00:09:31.190 7726.952 - 7784.189: 36.1229% ( 644) 00:09:31.190 7784.189 - 7841.425: 40.4131% ( 648) 00:09:31.190 7841.425 - 7898.662: 44.7166% ( 650) 00:09:31.190 7898.662 - 7955.899: 49.0797% ( 659) 00:09:31.190 7955.899 - 8013.135: 53.6083% ( 684) 00:09:31.190 8013.135 - 8070.372: 58.0310% ( 668) 00:09:31.190 8070.372 - 8127.609: 62.3411% ( 651) 00:09:31.190 8127.609 - 8184.845: 66.3334% ( 603) 00:09:31.190 8184.845 - 8242.082: 70.1801% ( 581) 00:09:31.190 8242.082 - 8299.319: 73.4905% ( 500) 00:09:31.190 8299.319 - 8356.555: 76.3705% ( 435) 00:09:31.190 8356.555 - 8413.792: 78.9062% ( 383) 00:09:31.190 8413.792 - 8471.029: 81.0448% ( 323) 00:09:31.190 8471.029 - 8528.266: 82.6536% ( 243) 00:09:31.190 8528.266 - 8585.502: 84.1698% ( 229) 00:09:31.190 8585.502 - 8642.739: 85.2688% ( 166) 00:09:31.190 8642.739 - 8699.976: 86.1758% ( 137) 00:09:31.190 8699.976 - 8757.212: 86.9770% ( 121) 00:09:31.190 8757.212 - 8814.449: 87.5331% ( 84) 00:09:31.190 8814.449 - 8871.686: 87.9436% ( 62) 00:09:31.190 8871.686 - 8928.922: 88.3673% ( 64) 00:09:31.190 8928.922 - 8986.159: 88.7182% ( 53) 00:09:31.190 8986.159 - 9043.396: 89.0757% ( 54) 00:09:31.190 9043.396 - 9100.632: 89.3406% ( 40) 00:09:31.190 9100.632 - 9157.869: 89.5392% ( 30) 00:09:31.190 9157.869 - 9215.106: 89.7511% ( 32) 00:09:31.190 9215.106 - 9272.342: 89.9828% ( 35) 00:09:31.190 9272.342 - 9329.579: 90.1682% ( 28) 00:09:31.190 9329.579 - 9386.816: 90.3867% ( 33) 00:09:31.190 9386.816 - 9444.052: 
90.6118% ( 34) 00:09:31.190 9444.052 - 9501.289: 90.7971% ( 28) 00:09:31.190 9501.289 - 9558.526: 91.0156% ( 33) 00:09:31.190 9558.526 - 9615.762: 91.1745% ( 24) 00:09:31.190 9615.762 - 9672.999: 91.3798% ( 31) 00:09:31.190 9672.999 - 9730.236: 91.5387% ( 24) 00:09:31.190 9730.236 - 9787.472: 91.6578% ( 18) 00:09:31.190 9787.472 - 9844.709: 91.7969% ( 21) 00:09:31.190 9844.709 - 9901.946: 91.9690% ( 26) 00:09:31.190 9901.946 - 9959.183: 92.1081% ( 21) 00:09:31.190 9959.183 - 10016.419: 92.2736% ( 25) 00:09:31.190 10016.419 - 10073.656: 92.4126% ( 21) 00:09:31.190 10073.656 - 10130.893: 92.5516% ( 21) 00:09:31.190 10130.893 - 10188.129: 92.6973% ( 22) 00:09:31.190 10188.129 - 10245.366: 92.8297% ( 20) 00:09:31.190 10245.366 - 10302.603: 92.9555% ( 19) 00:09:31.190 10302.603 - 10359.839: 93.0747% ( 18) 00:09:31.190 10359.839 - 10417.076: 93.2005% ( 19) 00:09:31.190 10417.076 - 10474.313: 93.2998% ( 15) 00:09:31.190 10474.313 - 10531.549: 93.4190% ( 18) 00:09:31.190 10531.549 - 10588.786: 93.5315% ( 17) 00:09:31.190 10588.786 - 10646.023: 93.5977% ( 10) 00:09:31.190 10646.023 - 10703.259: 93.6970% ( 15) 00:09:31.190 10703.259 - 10760.496: 93.7699% ( 11) 00:09:31.190 10760.496 - 10817.733: 93.8361% ( 10) 00:09:31.190 10817.733 - 10874.969: 93.9023% ( 10) 00:09:31.190 10874.969 - 10932.206: 93.9619% ( 9) 00:09:31.190 10932.206 - 10989.443: 94.0215% ( 9) 00:09:31.190 10989.443 - 11046.679: 94.0744% ( 8) 00:09:31.190 11046.679 - 11103.916: 94.1406% ( 10) 00:09:31.190 11103.916 - 11161.153: 94.1870% ( 7) 00:09:31.190 11161.153 - 11218.390: 94.2267% ( 6) 00:09:31.190 11218.390 - 11275.626: 94.3061% ( 12) 00:09:31.190 11275.626 - 11332.863: 94.3657% ( 9) 00:09:31.190 11332.863 - 11390.100: 94.4121% ( 7) 00:09:31.190 11390.100 - 11447.336: 94.4584% ( 7) 00:09:31.190 11447.336 - 11504.573: 94.4849% ( 4) 00:09:31.190 11504.573 - 11561.810: 94.5379% ( 8) 00:09:31.190 11561.810 - 11619.046: 94.5842% ( 7) 00:09:31.190 11619.046 - 11676.283: 94.6107% ( 4) 00:09:31.190 11676.283 - 11733.520: 94.6504% ( 6) 00:09:31.190 11733.520 - 11790.756: 94.7034% ( 8) 00:09:31.190 11790.756 - 11847.993: 94.7365% ( 5) 00:09:31.190 11847.993 - 11905.230: 94.8159% ( 12) 00:09:31.190 11905.230 - 11962.466: 94.8623% ( 7) 00:09:31.190 11962.466 - 12019.703: 94.9153% ( 8) 00:09:31.190 12019.703 - 12076.940: 94.9748% ( 9) 00:09:31.190 12076.940 - 12134.176: 95.0212% ( 7) 00:09:31.190 12134.176 - 12191.413: 95.0543% ( 5) 00:09:31.190 12191.413 - 12248.650: 95.1006% ( 7) 00:09:31.190 12248.650 - 12305.886: 95.1271% ( 4) 00:09:31.190 12305.886 - 12363.123: 95.1536% ( 4) 00:09:31.190 12363.123 - 12420.360: 95.1735% ( 3) 00:09:31.190 12420.360 - 12477.597: 95.1999% ( 4) 00:09:31.190 12477.597 - 12534.833: 95.2463% ( 7) 00:09:31.190 12534.833 - 12592.070: 95.2794% ( 5) 00:09:31.190 12592.070 - 12649.307: 95.3390% ( 9) 00:09:31.190 12649.307 - 12706.543: 95.3787% ( 6) 00:09:31.190 12706.543 - 12763.780: 95.4184% ( 6) 00:09:31.190 12763.780 - 12821.017: 95.4582% ( 6) 00:09:31.190 12821.017 - 12878.253: 95.4979% ( 6) 00:09:31.190 12878.253 - 12935.490: 95.5310% ( 5) 00:09:31.190 12935.490 - 12992.727: 95.5773% ( 7) 00:09:31.190 12992.727 - 13049.963: 95.6369% ( 9) 00:09:31.190 13049.963 - 13107.200: 95.6766% ( 6) 00:09:31.190 13107.200 - 13164.437: 95.7164% ( 6) 00:09:31.190 13164.437 - 13221.673: 95.7760% ( 9) 00:09:31.190 13221.673 - 13278.910: 95.8157% ( 6) 00:09:31.190 13278.910 - 13336.147: 95.8620% ( 7) 00:09:31.190 13336.147 - 13393.383: 95.8885% ( 4) 00:09:31.190 13393.383 - 13450.620: 95.9282% ( 6) 00:09:31.190 13450.620 - 
13507.857: 95.9415% ( 2) 00:09:31.190 13507.857 - 13565.093: 96.0143% ( 11) 00:09:31.190 13565.093 - 13622.330: 96.0606% ( 7) 00:09:31.190 13622.330 - 13679.567: 96.1269% ( 10) 00:09:31.190 13679.567 - 13736.803: 96.2063% ( 12) 00:09:31.190 13736.803 - 13794.040: 96.2659% ( 9) 00:09:31.190 13794.040 - 13851.277: 96.3189% ( 8) 00:09:31.190 13851.277 - 13908.514: 96.3652% ( 7) 00:09:31.190 13908.514 - 13965.750: 96.4380% ( 11) 00:09:31.190 13965.750 - 14022.987: 96.5109% ( 11) 00:09:31.190 14022.987 - 14080.224: 96.5837% ( 11) 00:09:31.190 14080.224 - 14137.460: 96.6631% ( 12) 00:09:31.190 14137.460 - 14194.697: 96.7360% ( 11) 00:09:31.190 14194.697 - 14251.934: 96.8154% ( 12) 00:09:31.190 14251.934 - 14309.170: 96.8882% ( 11) 00:09:31.190 14309.170 - 14366.407: 96.9412% ( 8) 00:09:31.190 14366.407 - 14423.644: 97.0140% ( 11) 00:09:31.190 14423.644 - 14480.880: 97.0736% ( 9) 00:09:31.190 14480.880 - 14538.117: 97.1266% ( 8) 00:09:31.190 14538.117 - 14595.354: 97.2127% ( 13) 00:09:31.190 14595.354 - 14652.590: 97.2524% ( 6) 00:09:31.190 14652.590 - 14767.064: 97.3980% ( 22) 00:09:31.190 14767.064 - 14881.537: 97.5238% ( 19) 00:09:31.190 14881.537 - 14996.010: 97.6496% ( 19) 00:09:31.190 14996.010 - 15110.484: 97.7953% ( 22) 00:09:31.190 15110.484 - 15224.957: 97.9211% ( 19) 00:09:31.190 15224.957 - 15339.431: 98.0204% ( 15) 00:09:31.190 15339.431 - 15453.904: 98.0998% ( 12) 00:09:31.190 15453.904 - 15568.377: 98.1793% ( 12) 00:09:31.190 15568.377 - 15682.851: 98.2786% ( 15) 00:09:31.190 15682.851 - 15797.324: 98.3779% ( 15) 00:09:31.190 15797.324 - 15911.797: 98.4640% ( 13) 00:09:31.190 15911.797 - 16026.271: 98.5434% ( 12) 00:09:31.190 16026.271 - 16140.744: 98.6295% ( 13) 00:09:31.190 16140.744 - 16255.217: 98.6626% ( 5) 00:09:31.190 16255.217 - 16369.691: 98.6891% ( 4) 00:09:31.190 16369.691 - 16484.164: 98.7023% ( 2) 00:09:31.190 16484.164 - 16598.638: 98.7156% ( 2) 00:09:31.190 16598.638 - 16713.111: 98.7288% ( 2) 00:09:31.190 17514.424 - 17628.898: 98.7354% ( 1) 00:09:31.190 17628.898 - 17743.371: 98.7685% ( 5) 00:09:31.190 17743.371 - 17857.845: 98.7950% ( 4) 00:09:31.190 17857.845 - 17972.318: 98.8215% ( 4) 00:09:31.190 17972.318 - 18086.791: 98.8546% ( 5) 00:09:31.190 18086.791 - 18201.265: 98.8745% ( 3) 00:09:31.190 18201.265 - 18315.738: 98.9076% ( 5) 00:09:31.190 18315.738 - 18430.211: 98.9407% ( 5) 00:09:31.190 18430.211 - 18544.685: 98.9672% ( 4) 00:09:31.190 18544.685 - 18659.158: 99.0003% ( 5) 00:09:31.190 18659.158 - 18773.631: 99.0267% ( 4) 00:09:31.190 18773.631 - 18888.105: 99.0599% ( 5) 00:09:31.190 18888.105 - 19002.578: 99.0863% ( 4) 00:09:31.190 19002.578 - 19117.052: 99.1194% ( 5) 00:09:31.190 19117.052 - 19231.525: 99.1525% ( 5) 00:09:31.190 22436.779 - 22551.252: 99.1790% ( 4) 00:09:31.190 22551.252 - 22665.726: 99.2121% ( 5) 00:09:31.190 22665.726 - 22780.199: 99.2386% ( 4) 00:09:31.190 22780.199 - 22894.672: 99.2717% ( 5) 00:09:31.190 22894.672 - 23009.146: 99.3048% ( 5) 00:09:31.190 23009.146 - 23123.619: 99.3379% ( 5) 00:09:31.190 23123.619 - 23238.093: 99.3710% ( 5) 00:09:31.190 23238.093 - 23352.566: 99.3975% ( 4) 00:09:31.191 23352.566 - 23467.039: 99.4240% ( 4) 00:09:31.191 23467.039 - 23581.513: 99.4571% ( 5) 00:09:31.191 23581.513 - 23695.986: 99.4902% ( 5) 00:09:31.191 23695.986 - 23810.459: 99.5233% ( 5) 00:09:31.191 23810.459 - 23924.933: 99.5498% ( 4) 00:09:31.191 23924.933 - 24039.406: 99.5763% ( 4) 00:09:31.191 31136.755 - 31365.701: 99.5961% ( 3) 00:09:31.191 31365.701 - 31594.648: 99.6557% ( 9) 00:09:31.191 31594.648 - 31823.595: 99.7087% ( 8) 
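
The per-namespace throughput is also consistent with Little's law (outstanding I/O = throughput x mean latency): with 128 reads kept outstanding against each namespace (-q 128) and the 8457.91 us overall average latency from the summary, the expected rate is 128 / 8457.91e-6, roughly 15.1K IOPS, which is what each namespace row reports. Another quick awk check (illustrative only):

$ awk 'BEGIN { printf "%.0f IOPS\n", 128 / 8457.91e-6 }'
15134 IOPS
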
00:09:31.191 31823.595 - 32052.541: 99.7617% ( 8) 00:09:31.191 32052.541 - 32281.488: 99.8146% ( 8) 00:09:31.191 32281.488 - 32510.435: 99.8742% ( 9) 00:09:31.191 32510.435 - 32739.382: 99.9338% ( 9) 00:09:31.191 32739.382 - 32968.328: 99.9934% ( 9) 00:09:31.191 32968.328 - 33197.275: 100.0000% ( 1) 00:09:31.191 00:09:31.191 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:31.191 ============================================================================== 00:09:31.191 Range in us Cumulative IO count 00:09:31.191 5752.286 - 5780.905: 0.0066% ( 1) 00:09:31.191 5780.905 - 5809.523: 0.0331% ( 4) 00:09:31.191 5809.523 - 5838.141: 0.0530% ( 3) 00:09:31.191 5838.141 - 5866.760: 0.0662% ( 2) 00:09:31.191 5866.760 - 5895.378: 0.0794% ( 2) 00:09:31.191 5895.378 - 5923.997: 0.0927% ( 2) 00:09:31.191 5923.997 - 5952.615: 0.1059% ( 2) 00:09:31.191 5952.615 - 5981.233: 0.1258% ( 3) 00:09:31.191 5981.233 - 6009.852: 0.1390% ( 2) 00:09:31.191 6009.852 - 6038.470: 0.1457% ( 1) 00:09:31.191 6038.470 - 6067.088: 0.1655% ( 3) 00:09:31.191 6067.088 - 6095.707: 0.1788% ( 2) 00:09:31.191 6095.707 - 6124.325: 0.1920% ( 2) 00:09:31.191 6124.325 - 6152.943: 0.2052% ( 2) 00:09:31.191 6152.943 - 6181.562: 0.2251% ( 3) 00:09:31.191 6181.562 - 6210.180: 0.2383% ( 2) 00:09:31.191 6210.180 - 6238.798: 0.2516% ( 2) 00:09:31.191 6238.798 - 6267.417: 0.2715% ( 3) 00:09:31.191 6267.417 - 6296.035: 0.2847% ( 2) 00:09:31.191 6296.035 - 6324.653: 0.3046% ( 3) 00:09:31.191 6324.653 - 6353.272: 0.3178% ( 2) 00:09:31.191 6353.272 - 6381.890: 0.3377% ( 3) 00:09:31.191 6381.890 - 6410.508: 0.3509% ( 2) 00:09:31.191 6410.508 - 6439.127: 0.3641% ( 2) 00:09:31.191 6439.127 - 6467.745: 0.3840% ( 3) 00:09:31.191 6467.745 - 6496.363: 0.3972% ( 2) 00:09:31.191 6496.363 - 6524.982: 0.4105% ( 2) 00:09:31.191 6524.982 - 6553.600: 0.4237% ( 2) 00:09:31.191 7011.493 - 7040.112: 0.4303% ( 1) 00:09:31.191 7040.112 - 7068.730: 0.4568% ( 4) 00:09:31.191 7068.730 - 7097.348: 0.5363% ( 12) 00:09:31.191 7097.348 - 7125.967: 0.6886% ( 23) 00:09:31.191 7125.967 - 7154.585: 0.8739% ( 28) 00:09:31.191 7154.585 - 7183.203: 1.2646% ( 59) 00:09:31.191 7183.203 - 7211.822: 1.6618% ( 60) 00:09:31.191 7211.822 - 7240.440: 2.0988% ( 66) 00:09:31.191 7240.440 - 7269.059: 2.7410% ( 97) 00:09:31.191 7269.059 - 7297.677: 3.6017% ( 130) 00:09:31.191 7297.677 - 7326.295: 4.5154% ( 138) 00:09:31.191 7326.295 - 7383.532: 6.7267% ( 334) 00:09:31.191 7383.532 - 7440.769: 9.5538% ( 427) 00:09:31.191 7440.769 - 7498.005: 12.9966% ( 520) 00:09:31.191 7498.005 - 7555.242: 16.8498% ( 582) 00:09:31.191 7555.242 - 7612.479: 21.0077% ( 628) 00:09:31.191 7612.479 - 7669.715: 25.3972% ( 663) 00:09:31.191 7669.715 - 7726.952: 30.0318% ( 700) 00:09:31.191 7726.952 - 7784.189: 34.8782% ( 732) 00:09:31.191 7784.189 - 7841.425: 39.8239% ( 747) 00:09:31.191 7841.425 - 7898.662: 44.9748% ( 778) 00:09:31.191 7898.662 - 7955.899: 50.3906% ( 818) 00:09:31.191 7955.899 - 8013.135: 55.4621% ( 766) 00:09:31.191 8013.135 - 8070.372: 60.4542% ( 754) 00:09:31.191 8070.372 - 8127.609: 65.1682% ( 712) 00:09:31.191 8127.609 - 8184.845: 69.3856% ( 637) 00:09:31.191 8184.845 - 8242.082: 73.0270% ( 550) 00:09:31.191 8242.082 - 8299.319: 76.2646% ( 489) 00:09:31.191 8299.319 - 8356.555: 78.8930% ( 397) 00:09:31.191 8356.555 - 8413.792: 81.0315% ( 323) 00:09:31.191 8413.792 - 8471.029: 82.8257% ( 271) 00:09:31.191 8471.029 - 8528.266: 84.2227% ( 211) 00:09:31.191 8528.266 - 8585.502: 85.3681% ( 173) 00:09:31.191 8585.502 - 8642.739: 86.3347% ( 146) 00:09:31.191 8642.739 - 8699.976: 
87.0895% ( 114) 00:09:31.191 8699.976 - 8757.212: 87.7052% ( 93) 00:09:31.191 8757.212 - 8814.449: 88.2415% ( 81) 00:09:31.191 8814.449 - 8871.686: 88.6520% ( 62) 00:09:31.191 8871.686 - 8928.922: 88.9632% ( 47) 00:09:31.191 8928.922 - 8986.159: 89.2148% ( 38) 00:09:31.191 8986.159 - 9043.396: 89.4068% ( 29) 00:09:31.191 9043.396 - 9100.632: 89.6120% ( 31) 00:09:31.191 9100.632 - 9157.869: 89.8239% ( 32) 00:09:31.191 9157.869 - 9215.106: 90.0490% ( 34) 00:09:31.191 9215.106 - 9272.342: 90.2675% ( 33) 00:09:31.191 9272.342 - 9329.579: 90.4793% ( 32) 00:09:31.191 9329.579 - 9386.816: 90.6713% ( 29) 00:09:31.191 9386.816 - 9444.052: 90.8104% ( 21) 00:09:31.191 9444.052 - 9501.289: 90.9428% ( 20) 00:09:31.191 9501.289 - 9558.526: 91.0686% ( 19) 00:09:31.191 9558.526 - 9615.762: 91.1811% ( 17) 00:09:31.191 9615.762 - 9672.999: 91.2871% ( 16) 00:09:31.191 9672.999 - 9730.236: 91.4195% ( 20) 00:09:31.191 9730.236 - 9787.472: 91.5254% ( 16) 00:09:31.191 9787.472 - 9844.709: 91.6578% ( 20) 00:09:31.191 9844.709 - 9901.946: 91.7903% ( 20) 00:09:31.191 9901.946 - 9959.183: 91.9094% ( 18) 00:09:31.191 9959.183 - 10016.419: 92.0220% ( 17) 00:09:31.191 10016.419 - 10073.656: 92.1147% ( 14) 00:09:31.191 10073.656 - 10130.893: 92.2074% ( 14) 00:09:31.191 10130.893 - 10188.129: 92.3596% ( 23) 00:09:31.191 10188.129 - 10245.366: 92.5252% ( 25) 00:09:31.191 10245.366 - 10302.603: 92.6973% ( 26) 00:09:31.191 10302.603 - 10359.839: 92.8430% ( 22) 00:09:31.191 10359.839 - 10417.076: 92.9489% ( 16) 00:09:31.191 10417.076 - 10474.313: 93.0813% ( 20) 00:09:31.191 10474.313 - 10531.549: 93.2005% ( 18) 00:09:31.191 10531.549 - 10588.786: 93.3197% ( 18) 00:09:31.191 10588.786 - 10646.023: 93.4521% ( 20) 00:09:31.191 10646.023 - 10703.259: 93.5712% ( 18) 00:09:31.191 10703.259 - 10760.496: 93.6904% ( 18) 00:09:31.191 10760.496 - 10817.733: 93.8030% ( 17) 00:09:31.191 10817.733 - 10874.969: 93.9354% ( 20) 00:09:31.191 10874.969 - 10932.206: 94.0479% ( 17) 00:09:31.191 10932.206 - 10989.443: 94.1605% ( 17) 00:09:31.191 10989.443 - 11046.679: 94.2664% ( 16) 00:09:31.191 11046.679 - 11103.916: 94.3525% ( 13) 00:09:31.191 11103.916 - 11161.153: 94.4055% ( 8) 00:09:31.191 11161.153 - 11218.390: 94.4584% ( 8) 00:09:31.191 11218.390 - 11275.626: 94.5048% ( 7) 00:09:31.191 11275.626 - 11332.863: 94.5577% ( 8) 00:09:31.191 11332.863 - 11390.100: 94.6041% ( 7) 00:09:31.191 11390.100 - 11447.336: 94.6504% ( 7) 00:09:31.191 11447.336 - 11504.573: 94.7100% ( 9) 00:09:31.191 11504.573 - 11561.810: 94.7564% ( 7) 00:09:31.191 11561.810 - 11619.046: 94.8093% ( 8) 00:09:31.191 11619.046 - 11676.283: 94.8623% ( 8) 00:09:31.191 11676.283 - 11733.520: 94.9086% ( 7) 00:09:31.191 11733.520 - 11790.756: 94.9484% ( 6) 00:09:31.191 11790.756 - 11847.993: 94.9682% ( 3) 00:09:31.191 11847.993 - 11905.230: 94.9881% ( 3) 00:09:31.191 11905.230 - 11962.466: 95.0079% ( 3) 00:09:31.191 11962.466 - 12019.703: 95.0278% ( 3) 00:09:31.191 12019.703 - 12076.940: 95.0410% ( 2) 00:09:31.191 12076.940 - 12134.176: 95.0609% ( 3) 00:09:31.191 12134.176 - 12191.413: 95.0808% ( 3) 00:09:31.191 12191.413 - 12248.650: 95.1006% ( 3) 00:09:31.191 12248.650 - 12305.886: 95.1337% ( 5) 00:09:31.191 12305.886 - 12363.123: 95.1735% ( 6) 00:09:31.191 12363.123 - 12420.360: 95.2331% ( 9) 00:09:31.191 12420.360 - 12477.597: 95.2993% ( 10) 00:09:31.191 12477.597 - 12534.833: 95.3588% ( 9) 00:09:31.191 12534.833 - 12592.070: 95.4251% ( 10) 00:09:31.191 12592.070 - 12649.307: 95.4780% ( 8) 00:09:31.191 12649.307 - 12706.543: 95.5442% ( 10) 00:09:31.191 12706.543 - 12763.780: 
00:09:31.191 [... remainder of the preceding latency histogram's per-bucket data elided (cumulative IO count reaches 100.0000%) ...]
00:09:31.192 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:09:31.192 ==============================================================================
00:09:31.192 Range in us Cumulative IO count
[... per-bucket latency data elided (cumulative IO count reaches 100.0000%) ...]
00:09:31.193 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:09:31.193 ==============================================================================
00:09:31.193 Range in us Cumulative IO count
[... per-bucket latency data elided (cumulative IO count reaches 100.0000%) ...]
00:09:31.194 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:09:31.194 ==============================================================================
00:09:31.194 Range in us Cumulative IO count
[... per-bucket latency data elided (cumulative IO count reaches 100.0000%) ...]
00:09:31.195 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:09:31.195 ==============================================================================
00:09:31.195 Range in us Cumulative IO count
[... per-bucket latency data elided (cumulative IO count reaches 100.0000%) ...]
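Each bucket line in these histograms pairs a latency range in microseconds with a cumulative percentage and an IO count, so approximate percentiles can be read straight off the cumulative column. A minimal sketch of that lookup, assuming the bucket lines have already been parsed into (range start, range end, cumulative %) tuples; the sample values below are illustrative, not taken from this run:

```python
# Approximate percentile lookup over cumulative latency-histogram buckets,
# i.e. the "Range in us / Cumulative IO count" rows printed above.
# Illustrative sample buckets: (range start us, range end us, cumulative %).
buckets = [
    (7040.112, 7068.730, 0.49),
    (7068.730, 7097.348, 0.56),
    (7097.348, 7125.967, 0.71),
]

def approx_percentile(buckets, pct):
    """Return the upper bound (us) of the first bucket whose cumulative
    percentage reaches pct, or None if pct lies beyond the recorded buckets."""
    for _lo, hi, cum in buckets:
        if cum >= pct:
            return hi
    return None

print(approx_percentile(buckets, 0.5))  # -> 7097.348
```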
00:09:31.196 11:51:29 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:09:32.137 Initializing NVMe Controllers
00:09:32.137 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:09:32.137 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:09:32.137 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:09:32.137 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:09:32.137 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:09:32.137 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:09:32.137 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:09:32.137 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:09:32.137 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:09:32.137 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:09:32.137 Initialization complete. Launching workers.
00:09:32.137 ========================================================
00:09:32.137 Latency(us)
00:09:32.137 Device Information                      :       IOPS      MiB/s    Average        min        max
00:09:32.137 PCIE (0000:00:10.0) NSID 1 from core 0  :    9117.55     106.85   14049.73    8084.42   40671.66
00:09:32.137 PCIE (0000:00:11.0) NSID 1 from core 0  :    9117.55     106.85   14039.10    7821.70   40936.73
00:09:32.137 PCIE (0000:00:13.0) NSID 1 from core 0  :    9117.55     106.85   14026.88    6368.66   41529.23
00:09:32.137 PCIE (0000:00:12.0) NSID 1 from core 0  :    9117.55     106.85   14014.23    6309.58   41339.46
00:09:32.137 PCIE (0000:00:12.0) NSID 2 from core 0  :    9117.55     106.85   14001.16    5470.57   40875.73
00:09:32.137 PCIE (0000:00:12.0) NSID 3 from core 0  :    9117.55     106.85   13988.07    5135.88   41817.84
00:09:32.137 ========================================================
00:09:32.137 Total                                   :   54705.32     641.08   14019.86    5135.88   41817.84
00:09:32.137 Summary latency data from core 0, per namespace (percentile : latency in us):
00:09:32.137 =================================================================================
00:09:32.137 Percentile     10.0 NSID1    11.0 NSID1    13.0 NSID1    12.0 NSID1    12.0 NSID2    12.0 NSID3
00:09:32.137   1.00000%       9901.946     10073.656      9844.709      9730.236      9386.816      8757.212
00:09:32.137  10.00000%      10874.969     10932.206     10874.969     10932.206     10932.206     10874.969
00:09:32.137  25.00000%      11561.810     11504.573     11504.573     11561.810     11619.046     11619.046
00:09:32.137  50.00000%      13736.803     13736.803     13736.803     13736.803     13794.040     13736.803
00:09:32.137  75.00000%      15453.904     15453.904     15339.431     15339.431     15453.904     15339.431
00:09:32.137  90.00000%      17514.424     17399.951     17628.898     17514.424     17514.424     17628.898
00:09:32.137  95.00000%      19002.578     18888.105     18659.158     19117.052     19231.525     18888.105
00:09:32.137  98.00000%      20834.152     21063.099     21292.045     21520.992     20490.732     20261.785
00:09:32.137  99.00000%      27817.027     28847.287     29534.128     29992.021     30220.968     30678.861
00:09:32.137  99.50000%      38920.943     39378.837     40065.677     40294.624     40294.624     40065.677
00:09:32.137  99.90000%      40294.624     40752.517     41439.357     41210.410     40752.517     41668.304
00:09:32.137  99.99000%      40752.517     40981.464     41668.304     41439.357     40981.464     41897.251
00:09:32.137 (The 99.99900%, 99.99990% and 99.99999% entries coincide with the 99.99000% value on every namespace.)
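The columns above are internally consistent: with the 12288-byte I/O size requested by `-o 12288`, the per-namespace IOPS figure reproduces the MiB/s column, and the Total row is the six namespaces summed. A quick sanity check using only the numbers printed above:

```python
# Cross-check the summary table: IOPS x I/O size should reproduce MiB/s.
io_size_bytes = 12288   # from the spdk_nvme_perf invocation (-o 12288)
iops = 9117.55          # per-namespace IOPS from the table above

print(f"{iops * io_size_bytes / 1024**2:.2f} MiB/s")  # -> 106.85, as reported
print(f"{6 * iops:.2f} IOPS total")                   # -> 54705.30, ~ the 54705.32 Total row
```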
00:09:32.137 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:09:32.137 ==============================================================================
00:09:32.137 Range in us Cumulative IO count
[... per-bucket latency data elided (cumulative IO count reaches 100.0000%) ...]
00:09:32.138 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:09:32.138 ==============================================================================
00:09:32.138 Range in us Cumulative IO count
[... per-bucket latency data elided; the histogram continues beyond this excerpt ...]
13221.673: 42.7994% ( 56) 00:09:32.138 13221.673 - 13278.910: 43.5205% ( 66) 00:09:32.138 13278.910 - 13336.147: 44.1761% ( 60) 00:09:32.138 13336.147 - 13393.383: 44.8427% ( 61) 00:09:32.138 13393.383 - 13450.620: 45.4436% ( 55) 00:09:32.138 13450.620 - 13507.857: 46.1320% ( 63) 00:09:32.138 13507.857 - 13565.093: 46.7657% ( 58) 00:09:32.138 13565.093 - 13622.330: 47.9458% ( 108) 00:09:32.138 13622.330 - 13679.567: 48.8090% ( 79) 00:09:32.138 13679.567 - 13736.803: 50.2622% ( 133) 00:09:32.138 13736.803 - 13794.040: 51.5079% ( 114) 00:09:32.138 13794.040 - 13851.277: 52.6552% ( 105) 00:09:32.138 13851.277 - 13908.514: 53.7369% ( 99) 00:09:32.138 13908.514 - 13965.750: 54.7312% ( 91) 00:09:32.138 13965.750 - 14022.987: 55.7255% ( 91) 00:09:32.138 14022.987 - 14080.224: 56.6871% ( 88) 00:09:32.138 14080.224 - 14137.460: 57.6267% ( 86) 00:09:32.138 14137.460 - 14194.697: 58.5664% ( 86) 00:09:32.138 14194.697 - 14251.934: 59.3641% ( 73) 00:09:32.138 14251.934 - 14309.170: 60.4021% ( 95) 00:09:32.138 14309.170 - 14366.407: 62.2487% ( 169) 00:09:32.138 14366.407 - 14423.644: 63.5162% ( 116) 00:09:32.138 14423.644 - 14480.880: 64.7290% ( 111) 00:09:32.138 14480.880 - 14538.117: 65.8108% ( 99) 00:09:32.138 14538.117 - 14595.354: 66.7941% ( 90) 00:09:32.138 14595.354 - 14652.590: 67.8322% ( 95) 00:09:32.138 14652.590 - 14767.064: 68.9467% ( 102) 00:09:32.138 14767.064 - 14881.537: 70.4873% ( 141) 00:09:32.138 14881.537 - 14996.010: 71.5253% ( 95) 00:09:32.138 14996.010 - 15110.484: 72.2247% ( 64) 00:09:32.138 15110.484 - 15224.957: 73.0660% ( 77) 00:09:32.138 15224.957 - 15339.431: 74.1477% ( 99) 00:09:32.138 15339.431 - 15453.904: 75.1420% ( 91) 00:09:32.138 15453.904 - 15568.377: 76.0162% ( 80) 00:09:32.138 15568.377 - 15682.851: 77.3492% ( 122) 00:09:32.138 15682.851 - 15797.324: 78.5402% ( 109) 00:09:32.138 15797.324 - 15911.797: 79.8186% ( 117) 00:09:32.138 15911.797 - 16026.271: 81.1954% ( 126) 00:09:32.138 16026.271 - 16140.744: 82.0258% ( 76) 00:09:32.138 16140.744 - 16255.217: 82.8344% ( 74) 00:09:32.138 16255.217 - 16369.691: 83.4790% ( 59) 00:09:32.138 16369.691 - 16484.164: 84.1018% ( 57) 00:09:32.138 16484.164 - 16598.638: 84.8121% ( 65) 00:09:32.138 16598.638 - 16713.111: 85.7299% ( 84) 00:09:32.138 16713.111 - 16827.584: 86.6149% ( 81) 00:09:32.138 16827.584 - 16942.058: 87.1613% ( 50) 00:09:32.138 16942.058 - 17056.531: 87.8387% ( 62) 00:09:32.138 17056.531 - 17171.004: 88.7238% ( 81) 00:09:32.138 17171.004 - 17285.478: 89.6307% ( 83) 00:09:32.138 17285.478 - 17399.951: 90.2753% ( 59) 00:09:32.138 17399.951 - 17514.424: 90.6250% ( 32) 00:09:32.138 17514.424 - 17628.898: 90.9419% ( 29) 00:09:32.138 17628.898 - 17743.371: 91.3352% ( 36) 00:09:32.138 17743.371 - 17857.845: 91.7723% ( 40) 00:09:32.138 17857.845 - 17972.318: 92.3733% ( 55) 00:09:32.138 17972.318 - 18086.791: 92.9633% ( 54) 00:09:32.138 18086.791 - 18201.265: 93.3348% ( 34) 00:09:32.138 18201.265 - 18315.738: 93.7281% ( 36) 00:09:32.138 18315.738 - 18430.211: 94.0013% ( 25) 00:09:32.138 18430.211 - 18544.685: 94.3400% ( 31) 00:09:32.138 18544.685 - 18659.158: 94.7771% ( 40) 00:09:32.138 18659.158 - 18773.631: 94.9847% ( 19) 00:09:32.138 18773.631 - 18888.105: 95.1267% ( 13) 00:09:32.138 18888.105 - 19002.578: 95.2797% ( 14) 00:09:32.138 19002.578 - 19117.052: 95.4436% ( 15) 00:09:32.138 19117.052 - 19231.525: 95.6731% ( 21) 00:09:32.138 19231.525 - 19345.998: 95.8479% ( 16) 00:09:32.138 19345.998 - 19460.472: 96.0227% ( 16) 00:09:32.138 19460.472 - 19574.945: 96.3068% ( 26) 00:09:32.138 19574.945 - 19689.418: 96.5800% ( 
25) 00:09:32.138 19689.418 - 19803.892: 96.7220% ( 13) 00:09:32.138 19803.892 - 19918.365: 96.8641% ( 13) 00:09:32.138 19918.365 - 20032.838: 96.9624% ( 9) 00:09:32.138 20032.838 - 20147.312: 97.0498% ( 8) 00:09:32.138 20147.312 - 20261.785: 97.1154% ( 6) 00:09:32.138 20261.785 - 20376.259: 97.1919% ( 7) 00:09:32.138 20376.259 - 20490.732: 97.3121% ( 11) 00:09:32.138 20490.732 - 20605.205: 97.4213% ( 10) 00:09:32.138 20605.205 - 20719.679: 97.5743% ( 14) 00:09:32.138 20719.679 - 20834.152: 97.6945% ( 11) 00:09:32.138 20834.152 - 20948.625: 97.8584% ( 15) 00:09:32.138 20948.625 - 21063.099: 98.0114% ( 14) 00:09:32.138 21063.099 - 21177.572: 98.1534% ( 13) 00:09:32.138 21177.572 - 21292.045: 98.2736% ( 11) 00:09:32.138 21292.045 - 21406.519: 98.3501% ( 7) 00:09:32.138 21406.519 - 21520.992: 98.3938% ( 4) 00:09:32.138 21520.992 - 21635.466: 98.4375% ( 4) 00:09:32.138 21635.466 - 21749.939: 98.4703% ( 3) 00:09:32.138 21749.939 - 21864.412: 98.4921% ( 2) 00:09:32.138 21864.412 - 21978.886: 98.5140% ( 2) 00:09:32.138 21978.886 - 22093.359: 98.5358% ( 2) 00:09:32.138 22093.359 - 22207.832: 98.5686% ( 3) 00:09:32.138 22207.832 - 22322.306: 98.5905% ( 2) 00:09:32.138 22322.306 - 22436.779: 98.6014% ( 1) 00:09:32.138 27359.134 - 27473.607: 98.6123% ( 1) 00:09:32.138 27473.607 - 27588.080: 98.6451% ( 3) 00:09:32.138 27588.080 - 27702.554: 98.6779% ( 3) 00:09:32.138 27702.554 - 27817.027: 98.7216% ( 4) 00:09:32.138 27817.027 - 27931.500: 98.7544% ( 3) 00:09:32.138 27931.500 - 28045.974: 98.7872% ( 3) 00:09:32.138 28045.974 - 28160.447: 98.8199% ( 3) 00:09:32.138 28160.447 - 28274.921: 98.8527% ( 3) 00:09:32.138 28274.921 - 28389.394: 98.8964% ( 4) 00:09:32.138 28389.394 - 28503.867: 98.9292% ( 3) 00:09:32.138 28503.867 - 28618.341: 98.9620% ( 3) 00:09:32.138 28618.341 - 28732.814: 98.9948% ( 3) 00:09:32.138 28732.814 - 28847.287: 99.0275% ( 3) 00:09:32.138 28847.287 - 28961.761: 99.0712% ( 4) 00:09:32.138 28961.761 - 29076.234: 99.1149% ( 4) 00:09:32.138 29076.234 - 29190.707: 99.1477% ( 3) 00:09:32.138 29190.707 - 29305.181: 99.1914% ( 4) 00:09:32.138 29305.181 - 29534.128: 99.2570% ( 6) 00:09:32.138 29534.128 - 29763.074: 99.3007% ( 4) 00:09:32.138 38463.050 - 38691.997: 99.3226% ( 2) 00:09:32.138 38691.997 - 38920.943: 99.3990% ( 7) 00:09:32.138 38920.943 - 39149.890: 99.4755% ( 7) 00:09:32.138 39149.890 - 39378.837: 99.5520% ( 7) 00:09:32.138 39378.837 - 39607.783: 99.6176% ( 6) 00:09:32.138 39607.783 - 39836.730: 99.6831% ( 6) 00:09:32.138 39836.730 - 40065.677: 99.7378% ( 5) 00:09:32.138 40065.677 - 40294.624: 99.8142% ( 7) 00:09:32.138 40294.624 - 40523.570: 99.8798% ( 6) 00:09:32.138 40523.570 - 40752.517: 99.9454% ( 6) 00:09:32.138 40752.517 - 40981.464: 100.0000% ( 5) 00:09:32.138 00:09:32.138 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:32.138 ============================================================================== 00:09:32.138 Range in us Cumulative IO count 00:09:32.138 6353.272 - 6381.890: 0.0109% ( 1) 00:09:32.138 6381.890 - 6410.508: 0.0437% ( 3) 00:09:32.138 6410.508 - 6439.127: 0.0765% ( 3) 00:09:32.138 6439.127 - 6467.745: 0.0983% ( 2) 00:09:32.138 6467.745 - 6496.363: 0.1311% ( 3) 00:09:32.138 6496.363 - 6524.982: 0.1530% ( 2) 00:09:32.138 6524.982 - 6553.600: 0.1748% ( 2) 00:09:32.138 6553.600 - 6582.218: 0.1858% ( 1) 00:09:32.138 6582.218 - 6610.837: 0.2076% ( 2) 00:09:32.138 6610.837 - 6639.455: 0.2404% ( 3) 00:09:32.138 6639.455 - 6668.073: 0.2513% ( 1) 00:09:32.138 6668.073 - 6696.692: 0.3169% ( 6) 00:09:32.138 6696.692 - 6725.310: 0.3606% ( 4) 
00:09:32.138 6725.310 - 6753.928: 0.3824% ( 2) 00:09:32.138 6753.928 - 6782.547: 0.3934% ( 1) 00:09:32.138 6782.547 - 6811.165: 0.4152% ( 2) 00:09:32.138 6811.165 - 6839.783: 0.4371% ( 2) 00:09:32.138 6839.783 - 6868.402: 0.4589% ( 2) 00:09:32.138 6868.402 - 6897.020: 0.4808% ( 2) 00:09:32.138 6897.020 - 6925.638: 0.5026% ( 2) 00:09:32.138 6925.638 - 6954.257: 0.5245% ( 2) 00:09:32.138 6954.257 - 6982.875: 0.5573% ( 3) 00:09:32.138 6982.875 - 7011.493: 0.5682% ( 1) 00:09:32.138 7011.493 - 7040.112: 0.5900% ( 2) 00:09:32.138 7040.112 - 7068.730: 0.6119% ( 2) 00:09:32.138 7068.730 - 7097.348: 0.6337% ( 2) 00:09:32.138 7097.348 - 7125.967: 0.6556% ( 2) 00:09:32.138 7125.967 - 7154.585: 0.6774% ( 2) 00:09:32.138 7154.585 - 7183.203: 0.6993% ( 2) 00:09:32.138 9501.289 - 9558.526: 0.7102% ( 1) 00:09:32.138 9558.526 - 9615.762: 0.7321% ( 2) 00:09:32.138 9615.762 - 9672.999: 0.7976% ( 6) 00:09:32.138 9672.999 - 9730.236: 0.8632% ( 6) 00:09:32.138 9730.236 - 9787.472: 0.9397% ( 7) 00:09:32.138 9787.472 - 9844.709: 1.1036% ( 15) 00:09:32.138 9844.709 - 9901.946: 1.3549% ( 23) 00:09:32.138 9901.946 - 9959.183: 1.5734% ( 20) 00:09:32.138 9959.183 - 10016.419: 1.9559% ( 35) 00:09:32.138 10016.419 - 10073.656: 2.3164% ( 33) 00:09:32.138 10073.656 - 10130.893: 2.5677% ( 23) 00:09:32.138 10130.893 - 10188.129: 3.0157% ( 41) 00:09:32.138 10188.129 - 10245.366: 3.5074% ( 45) 00:09:32.138 10245.366 - 10302.603: 3.7041% ( 18) 00:09:32.138 10302.603 - 10359.839: 3.9226% ( 20) 00:09:32.138 10359.839 - 10417.076: 4.1630% ( 22) 00:09:32.138 10417.076 - 10474.313: 4.5236% ( 33) 00:09:32.138 10474.313 - 10531.549: 5.0262% ( 46) 00:09:32.138 10531.549 - 10588.786: 5.8348% ( 74) 00:09:32.138 10588.786 - 10646.023: 6.7308% ( 82) 00:09:32.138 10646.023 - 10703.259: 7.7469% ( 93) 00:09:32.138 10703.259 - 10760.496: 8.7740% ( 94) 00:09:32.138 10760.496 - 10817.733: 9.6809% ( 83) 00:09:32.138 10817.733 - 10874.969: 10.5223% ( 77) 00:09:32.138 10874.969 - 10932.206: 11.5166% ( 91) 00:09:32.138 10932.206 - 10989.443: 12.6093% ( 100) 00:09:32.138 10989.443 - 11046.679: 13.7675% ( 106) 00:09:32.138 11046.679 - 11103.916: 14.7946% ( 94) 00:09:32.138 11103.916 - 11161.153: 16.4773% ( 154) 00:09:32.138 11161.153 - 11218.390: 17.9633% ( 136) 00:09:32.138 11218.390 - 11275.626: 19.2963% ( 122) 00:09:32.138 11275.626 - 11332.863: 20.9899% ( 155) 00:09:32.138 11332.863 - 11390.100: 22.4978% ( 138) 00:09:32.138 11390.100 - 11447.336: 23.7325% ( 113) 00:09:32.138 11447.336 - 11504.573: 25.0874% ( 124) 00:09:32.138 11504.573 - 11561.810: 26.2675% ( 108) 00:09:32.138 11561.810 - 11619.046: 27.0323% ( 70) 00:09:32.138 11619.046 - 11676.283: 27.6989% ( 61) 00:09:32.138 11676.283 - 11733.520: 28.2233% ( 48) 00:09:32.138 11733.520 - 11790.756: 28.7041% ( 44) 00:09:32.138 11790.756 - 11847.993: 29.1412% ( 40) 00:09:32.138 11847.993 - 11905.230: 29.4908% ( 32) 00:09:32.138 11905.230 - 11962.466: 29.8186% ( 30) 00:09:32.138 11962.466 - 12019.703: 30.3540% ( 49) 00:09:32.138 12019.703 - 12076.940: 30.8129% ( 42) 00:09:32.138 12076.940 - 12134.176: 31.2609% ( 41) 00:09:32.138 12134.176 - 12191.413: 31.7198% ( 42) 00:09:32.138 12191.413 - 12248.650: 32.3973% ( 62) 00:09:32.138 12248.650 - 12305.886: 32.8562% ( 42) 00:09:32.138 12305.886 - 12363.123: 33.3260% ( 43) 00:09:32.138 12363.123 - 12420.360: 34.0363% ( 65) 00:09:32.138 12420.360 - 12477.597: 34.6591% ( 57) 00:09:32.138 12477.597 - 12534.833: 35.4677% ( 74) 00:09:32.138 12534.833 - 12592.070: 36.4948% ( 94) 00:09:32.138 12592.070 - 12649.307: 37.1176% ( 57) 00:09:32.138 12649.307 - 
12706.543: 37.9589% ( 77) 00:09:32.138 12706.543 - 12763.780: 39.0406% ( 99) 00:09:32.138 12763.780 - 12821.017: 39.7072% ( 61) 00:09:32.138 12821.017 - 12878.253: 40.4939% ( 72) 00:09:32.138 12878.253 - 12935.490: 41.4226% ( 85) 00:09:32.138 12935.490 - 12992.727: 42.2421% ( 75) 00:09:32.138 12992.727 - 13049.963: 42.8322% ( 54) 00:09:32.138 13049.963 - 13107.200: 43.6626% ( 76) 00:09:32.138 13107.200 - 13164.437: 44.3182% ( 60) 00:09:32.138 13164.437 - 13221.673: 45.0175% ( 64) 00:09:32.138 13221.673 - 13278.910: 45.4873% ( 43) 00:09:32.138 13278.910 - 13336.147: 45.9462% ( 42) 00:09:32.138 13336.147 - 13393.383: 46.4816% ( 49) 00:09:32.138 13393.383 - 13450.620: 46.9296% ( 41) 00:09:32.138 13450.620 - 13507.857: 47.7382% ( 74) 00:09:32.138 13507.857 - 13565.093: 48.3829% ( 59) 00:09:32.138 13565.093 - 13622.330: 48.9401% ( 51) 00:09:32.138 13622.330 - 13679.567: 49.4427% ( 46) 00:09:32.138 13679.567 - 13736.803: 50.0546% ( 56) 00:09:32.138 13736.803 - 13794.040: 51.0380% ( 90) 00:09:32.138 13794.040 - 13851.277: 52.3383% ( 119) 00:09:32.138 13851.277 - 13908.514: 53.3108% ( 89) 00:09:32.138 13908.514 - 13965.750: 54.2286% ( 84) 00:09:32.138 13965.750 - 14022.987: 55.5288% ( 119) 00:09:32.138 14022.987 - 14080.224: 56.5450% ( 93) 00:09:32.138 14080.224 - 14137.460: 57.7251% ( 108) 00:09:32.138 14137.460 - 14194.697: 58.7631% ( 95) 00:09:32.138 14194.697 - 14251.934: 59.9104% ( 105) 00:09:32.138 14251.934 - 14309.170: 60.9484% ( 95) 00:09:32.138 14309.170 - 14366.407: 61.8007% ( 78) 00:09:32.138 14366.407 - 14423.644: 62.6748% ( 80) 00:09:32.138 14423.644 - 14480.880: 63.7347% ( 97) 00:09:32.138 14480.880 - 14538.117: 64.5105% ( 71) 00:09:32.138 14538.117 - 14595.354: 65.2535% ( 68) 00:09:32.138 14595.354 - 14652.590: 66.0621% ( 74) 00:09:32.138 14652.590 - 14767.064: 67.9851% ( 176) 00:09:32.139 14767.064 - 14881.537: 69.4056% ( 130) 00:09:32.139 14881.537 - 14996.010: 70.6949% ( 118) 00:09:32.139 14996.010 - 15110.484: 72.0826% ( 127) 00:09:32.139 15110.484 - 15224.957: 73.4266% ( 123) 00:09:32.139 15224.957 - 15339.431: 75.2404% ( 166) 00:09:32.139 15339.431 - 15453.904: 76.3440% ( 101) 00:09:32.139 15453.904 - 15568.377: 77.3383% ( 91) 00:09:32.139 15568.377 - 15682.851: 78.2233% ( 81) 00:09:32.139 15682.851 - 15797.324: 79.2286% ( 92) 00:09:32.139 15797.324 - 15911.797: 80.5507% ( 121) 00:09:32.139 15911.797 - 16026.271: 81.6543% ( 101) 00:09:32.139 16026.271 - 16140.744: 82.6049% ( 87) 00:09:32.139 16140.744 - 16255.217: 83.2933% ( 63) 00:09:32.139 16255.217 - 16369.691: 83.7085% ( 38) 00:09:32.139 16369.691 - 16484.164: 84.1128% ( 37) 00:09:32.139 16484.164 - 16598.638: 84.5280% ( 38) 00:09:32.139 16598.638 - 16713.111: 84.8885% ( 33) 00:09:32.139 16713.111 - 16827.584: 85.4895% ( 55) 00:09:32.139 16827.584 - 16942.058: 86.2216% ( 67) 00:09:32.139 16942.058 - 17056.531: 86.7024% ( 44) 00:09:32.139 17056.531 - 17171.004: 87.2596% ( 51) 00:09:32.139 17171.004 - 17285.478: 88.0026% ( 68) 00:09:32.139 17285.478 - 17399.951: 88.7456% ( 68) 00:09:32.139 17399.951 - 17514.424: 89.6962% ( 87) 00:09:32.139 17514.424 - 17628.898: 90.6906% ( 91) 00:09:32.139 17628.898 - 17743.371: 91.4663% ( 71) 00:09:32.139 17743.371 - 17857.845: 92.0127% ( 50) 00:09:32.139 17857.845 - 17972.318: 92.3514% ( 31) 00:09:32.139 17972.318 - 18086.791: 92.7994% ( 41) 00:09:32.139 18086.791 - 18201.265: 93.4331% ( 58) 00:09:32.139 18201.265 - 18315.738: 94.0450% ( 56) 00:09:32.139 18315.738 - 18430.211: 94.4493% ( 37) 00:09:32.139 18430.211 - 18544.685: 94.8317% ( 35) 00:09:32.139 18544.685 - 18659.158: 95.1377% ( 
28) 00:09:32.139 18659.158 - 18773.631: 95.2469% ( 10) 00:09:32.139 18773.631 - 18888.105: 95.3562% ( 10) 00:09:32.139 18888.105 - 19002.578: 95.4218% ( 6) 00:09:32.139 19002.578 - 19117.052: 95.5420% ( 11) 00:09:32.139 19117.052 - 19231.525: 95.7496% ( 19) 00:09:32.139 19231.525 - 19345.998: 95.9244% ( 16) 00:09:32.139 19345.998 - 19460.472: 96.1101% ( 17) 00:09:32.139 19460.472 - 19574.945: 96.2959% ( 17) 00:09:32.139 19574.945 - 19689.418: 96.5144% ( 20) 00:09:32.139 19689.418 - 19803.892: 96.7985% ( 26) 00:09:32.139 19803.892 - 19918.365: 96.9406% ( 13) 00:09:32.139 19918.365 - 20032.838: 97.0717% ( 12) 00:09:32.139 20032.838 - 20147.312: 97.2028% ( 12) 00:09:32.139 20147.312 - 20261.785: 97.2902% ( 8) 00:09:32.139 20261.785 - 20376.259: 97.3448% ( 5) 00:09:32.139 20376.259 - 20490.732: 97.4323% ( 8) 00:09:32.139 20490.732 - 20605.205: 97.6071% ( 16) 00:09:32.139 20605.205 - 20719.679: 97.6836% ( 7) 00:09:32.139 20719.679 - 20834.152: 97.7491% ( 6) 00:09:32.139 20834.152 - 20948.625: 97.8256% ( 7) 00:09:32.139 20948.625 - 21063.099: 97.9021% ( 7) 00:09:32.139 21063.099 - 21177.572: 97.9786% ( 7) 00:09:32.139 21177.572 - 21292.045: 98.0551% ( 7) 00:09:32.139 21292.045 - 21406.519: 98.2408% ( 17) 00:09:32.139 21406.519 - 21520.992: 98.4812% ( 22) 00:09:32.139 21520.992 - 21635.466: 98.5468% ( 6) 00:09:32.139 21635.466 - 21749.939: 98.5795% ( 3) 00:09:32.139 21749.939 - 21864.412: 98.6014% ( 2) 00:09:32.139 28160.447 - 28274.921: 98.6123% ( 1) 00:09:32.139 28274.921 - 28389.394: 98.6451% ( 3) 00:09:32.139 28389.394 - 28503.867: 98.6888% ( 4) 00:09:32.139 28503.867 - 28618.341: 98.7216% ( 3) 00:09:32.139 28618.341 - 28732.814: 98.7544% ( 3) 00:09:32.139 28732.814 - 28847.287: 98.7872% ( 3) 00:09:32.139 28847.287 - 28961.761: 98.8199% ( 3) 00:09:32.139 28961.761 - 29076.234: 98.8527% ( 3) 00:09:32.139 29076.234 - 29190.707: 98.8964% ( 4) 00:09:32.139 29190.707 - 29305.181: 98.9292% ( 3) 00:09:32.139 29305.181 - 29534.128: 99.0057% ( 7) 00:09:32.139 29534.128 - 29763.074: 99.0822% ( 7) 00:09:32.139 29763.074 - 29992.021: 99.1477% ( 6) 00:09:32.139 29992.021 - 30220.968: 99.2133% ( 6) 00:09:32.139 30220.968 - 30449.914: 99.2898% ( 7) 00:09:32.139 30449.914 - 30678.861: 99.3007% ( 1) 00:09:32.139 39149.890 - 39378.837: 99.3226% ( 2) 00:09:32.139 39378.837 - 39607.783: 99.3881% ( 6) 00:09:32.139 39607.783 - 39836.730: 99.4537% ( 6) 00:09:32.139 39836.730 - 40065.677: 99.5302% ( 7) 00:09:32.139 40065.677 - 40294.624: 99.6066% ( 7) 00:09:32.139 40294.624 - 40523.570: 99.6722% ( 6) 00:09:32.139 40523.570 - 40752.517: 99.7378% ( 6) 00:09:32.139 40752.517 - 40981.464: 99.8142% ( 7) 00:09:32.139 40981.464 - 41210.410: 99.8907% ( 7) 00:09:32.139 41210.410 - 41439.357: 99.9672% ( 7) 00:09:32.139 41439.357 - 41668.304: 100.0000% ( 3) 00:09:32.139 00:09:32.139 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:32.139 ============================================================================== 00:09:32.139 Range in us Cumulative IO count 00:09:32.139 6296.035 - 6324.653: 0.0109% ( 1) 00:09:32.139 6324.653 - 6353.272: 0.0328% ( 2) 00:09:32.139 6353.272 - 6381.890: 0.0546% ( 2) 00:09:32.139 6381.890 - 6410.508: 0.0765% ( 2) 00:09:32.139 6410.508 - 6439.127: 0.0983% ( 2) 00:09:32.139 6439.127 - 6467.745: 0.1311% ( 3) 00:09:32.139 6467.745 - 6496.363: 0.1858% ( 5) 00:09:32.139 6496.363 - 6524.982: 0.2404% ( 5) 00:09:32.139 6524.982 - 6553.600: 0.2950% ( 5) 00:09:32.139 6553.600 - 6582.218: 0.4480% ( 14) 00:09:32.139 6582.218 - 6610.837: 0.4917% ( 4) 00:09:32.139 6610.837 - 6639.455: 0.5354% 
( 4) 00:09:32.139 6639.455 - 6668.073: 0.5682% ( 3) 00:09:32.139 6668.073 - 6696.692: 0.6119% ( 4) 00:09:32.139 6696.692 - 6725.310: 0.6447% ( 3) 00:09:32.139 6725.310 - 6753.928: 0.6665% ( 2) 00:09:32.139 6753.928 - 6782.547: 0.6774% ( 1) 00:09:32.139 6782.547 - 6811.165: 0.6993% ( 2) 00:09:32.139 9444.052 - 9501.289: 0.7321% ( 3) 00:09:32.139 9501.289 - 9558.526: 0.7867% ( 5) 00:09:32.139 9558.526 - 9615.762: 0.8413% ( 5) 00:09:32.139 9615.762 - 9672.999: 0.9178% ( 7) 00:09:32.139 9672.999 - 9730.236: 1.0380% ( 11) 00:09:32.139 9730.236 - 9787.472: 1.1582% ( 11) 00:09:32.139 9787.472 - 9844.709: 1.1910% ( 3) 00:09:32.139 9844.709 - 9901.946: 1.2784% ( 8) 00:09:32.139 9901.946 - 9959.183: 1.3986% ( 11) 00:09:32.139 9959.183 - 10016.419: 1.5734% ( 16) 00:09:32.139 10016.419 - 10073.656: 1.8575% ( 26) 00:09:32.139 10073.656 - 10130.893: 2.1853% ( 30) 00:09:32.139 10130.893 - 10188.129: 2.6661% ( 44) 00:09:32.139 10188.129 - 10245.366: 3.0485% ( 35) 00:09:32.139 10245.366 - 10302.603: 3.4965% ( 41) 00:09:32.139 10302.603 - 10359.839: 4.1630% ( 61) 00:09:32.139 10359.839 - 10417.076: 4.6766% ( 47) 00:09:32.139 10417.076 - 10474.313: 5.0481% ( 34) 00:09:32.139 10474.313 - 10531.549: 5.6490% ( 55) 00:09:32.139 10531.549 - 10588.786: 6.1844% ( 49) 00:09:32.139 10588.786 - 10646.023: 6.7308% ( 50) 00:09:32.139 10646.023 - 10703.259: 7.3536% ( 57) 00:09:32.139 10703.259 - 10760.496: 7.9327% ( 53) 00:09:32.139 10760.496 - 10817.733: 8.8396% ( 83) 00:09:32.139 10817.733 - 10874.969: 9.7574% ( 84) 00:09:32.139 10874.969 - 10932.206: 10.7080% ( 87) 00:09:32.139 10932.206 - 10989.443: 11.8881% ( 108) 00:09:32.139 10989.443 - 11046.679: 13.2867% ( 128) 00:09:32.139 11046.679 - 11103.916: 14.8055% ( 139) 00:09:32.139 11103.916 - 11161.153: 16.6521% ( 169) 00:09:32.139 11161.153 - 11218.390: 18.3785% ( 158) 00:09:32.139 11218.390 - 11275.626: 20.0940% ( 157) 00:09:32.139 11275.626 - 11332.863: 21.6237% ( 140) 00:09:32.139 11332.863 - 11390.100: 22.7054% ( 99) 00:09:32.139 11390.100 - 11447.336: 23.6779% ( 89) 00:09:32.139 11447.336 - 11504.573: 24.4646% ( 72) 00:09:32.139 11504.573 - 11561.810: 25.2076% ( 68) 00:09:32.139 11561.810 - 11619.046: 25.8523% ( 59) 00:09:32.139 11619.046 - 11676.283: 26.5844% ( 67) 00:09:32.139 11676.283 - 11733.520: 27.2072% ( 57) 00:09:32.139 11733.520 - 11790.756: 27.7535% ( 50) 00:09:32.139 11790.756 - 11847.993: 28.4747% ( 66) 00:09:32.139 11847.993 - 11905.230: 29.1084% ( 58) 00:09:32.139 11905.230 - 11962.466: 29.7968% ( 63) 00:09:32.139 11962.466 - 12019.703: 30.3759% ( 53) 00:09:32.139 12019.703 - 12076.940: 30.9768% ( 55) 00:09:32.139 12076.940 - 12134.176: 31.6980% ( 66) 00:09:32.139 12134.176 - 12191.413: 32.3427% ( 59) 00:09:32.139 12191.413 - 12248.650: 33.1184% ( 71) 00:09:32.139 12248.650 - 12305.886: 33.9707% ( 78) 00:09:32.139 12305.886 - 12363.123: 34.7247% ( 69) 00:09:32.139 12363.123 - 12420.360: 35.5551% ( 76) 00:09:32.139 12420.360 - 12477.597: 36.3746% ( 75) 00:09:32.139 12477.597 - 12534.833: 37.0629% ( 63) 00:09:32.139 12534.833 - 12592.070: 37.6311% ( 52) 00:09:32.139 12592.070 - 12649.307: 38.1665% ( 49) 00:09:32.139 12649.307 - 12706.543: 38.6910% ( 48) 00:09:32.139 12706.543 - 12763.780: 39.1936% ( 46) 00:09:32.139 12763.780 - 12821.017: 39.7727% ( 53) 00:09:32.139 12821.017 - 12878.253: 40.4939% ( 66) 00:09:32.139 12878.253 - 12935.490: 40.9528% ( 42) 00:09:32.139 12935.490 - 12992.727: 41.5210% ( 52) 00:09:32.139 12992.727 - 13049.963: 42.0127% ( 45) 00:09:32.139 13049.963 - 13107.200: 42.4716% ( 42) 00:09:32.139 13107.200 - 13164.437: 43.2911% ( 
75) 00:09:32.139 13164.437 - 13221.673: 43.8046% ( 47) 00:09:32.139 13221.673 - 13278.910: 44.5039% ( 64) 00:09:32.139 13278.910 - 13336.147: 45.0284% ( 48) 00:09:32.139 13336.147 - 13393.383: 45.5747% ( 50) 00:09:32.139 13393.383 - 13450.620: 46.1429% ( 52) 00:09:32.139 13450.620 - 13507.857: 46.8859% ( 68) 00:09:32.139 13507.857 - 13565.093: 47.6071% ( 66) 00:09:32.139 13565.093 - 13622.330: 48.8527% ( 114) 00:09:32.139 13622.330 - 13679.567: 49.6176% ( 70) 00:09:32.139 13679.567 - 13736.803: 50.3497% ( 67) 00:09:32.139 13736.803 - 13794.040: 51.3877% ( 95) 00:09:32.139 13794.040 - 13851.277: 52.7098% ( 121) 00:09:32.139 13851.277 - 13908.514: 54.0647% ( 124) 00:09:32.139 13908.514 - 13965.750: 55.4633% ( 128) 00:09:32.139 13965.750 - 14022.987: 56.8619% ( 128) 00:09:32.139 14022.987 - 14080.224: 57.9108% ( 96) 00:09:32.139 14080.224 - 14137.460: 59.1455% ( 113) 00:09:32.139 14137.460 - 14194.697: 59.9760% ( 76) 00:09:32.139 14194.697 - 14251.934: 61.1779% ( 110) 00:09:32.139 14251.934 - 14309.170: 62.2050% ( 94) 00:09:32.139 14309.170 - 14366.407: 62.8169% ( 56) 00:09:32.139 14366.407 - 14423.644: 63.4943% ( 62) 00:09:32.139 14423.644 - 14480.880: 64.0625% ( 52) 00:09:32.139 14480.880 - 14538.117: 64.7290% ( 61) 00:09:32.139 14538.117 - 14595.354: 65.6906% ( 88) 00:09:32.139 14595.354 - 14652.590: 66.4663% ( 71) 00:09:32.139 14652.590 - 14767.064: 68.0835% ( 148) 00:09:32.139 14767.064 - 14881.537: 69.6460% ( 143) 00:09:32.139 14881.537 - 14996.010: 71.3942% ( 160) 00:09:32.139 14996.010 - 15110.484: 73.1862% ( 164) 00:09:32.139 15110.484 - 15224.957: 74.8798% ( 155) 00:09:32.139 15224.957 - 15339.431: 76.7264% ( 169) 00:09:32.139 15339.431 - 15453.904: 78.1250% ( 128) 00:09:32.139 15453.904 - 15568.377: 78.9445% ( 75) 00:09:32.139 15568.377 - 15682.851: 79.6001% ( 60) 00:09:32.139 15682.851 - 15797.324: 80.3431% ( 68) 00:09:32.139 15797.324 - 15911.797: 81.1080% ( 70) 00:09:32.139 15911.797 - 16026.271: 81.8400% ( 67) 00:09:32.139 16026.271 - 16140.744: 82.9873% ( 105) 00:09:32.139 16140.744 - 16255.217: 83.6211% ( 58) 00:09:32.139 16255.217 - 16369.691: 84.3531% ( 67) 00:09:32.139 16369.691 - 16484.164: 84.8121% ( 42) 00:09:32.139 16484.164 - 16598.638: 85.4130% ( 55) 00:09:32.139 16598.638 - 16713.111: 85.9921% ( 53) 00:09:32.139 16713.111 - 16827.584: 86.6259% ( 58) 00:09:32.139 16827.584 - 16942.058: 87.3142% ( 63) 00:09:32.139 16942.058 - 17056.531: 87.6748% ( 33) 00:09:32.139 17056.531 - 17171.004: 88.0791% ( 37) 00:09:32.139 17171.004 - 17285.478: 88.8221% ( 68) 00:09:32.139 17285.478 - 17399.951: 89.6525% ( 76) 00:09:32.139 17399.951 - 17514.424: 90.2207% ( 52) 00:09:32.139 17514.424 - 17628.898: 90.6141% ( 36) 00:09:32.139 17628.898 - 17743.371: 90.9856% ( 34) 00:09:32.139 17743.371 - 17857.845: 91.4117% ( 39) 00:09:32.139 17857.845 - 17972.318: 91.9690% ( 51) 00:09:32.139 17972.318 - 18086.791: 92.3077% ( 31) 00:09:32.139 18086.791 - 18201.265: 92.5809% ( 25) 00:09:32.140 18201.265 - 18315.738: 92.7994% ( 20) 00:09:32.140 18315.738 - 18430.211: 93.0288% ( 21) 00:09:32.140 18430.211 - 18544.685: 93.2911% ( 24) 00:09:32.140 18544.685 - 18659.158: 93.8046% ( 47) 00:09:32.140 18659.158 - 18773.631: 94.1434% ( 31) 00:09:32.140 18773.631 - 18888.105: 94.4821% ( 31) 00:09:32.140 18888.105 - 19002.578: 94.9301% ( 41) 00:09:32.140 19002.578 - 19117.052: 95.2251% ( 27) 00:09:32.140 19117.052 - 19231.525: 95.5638% ( 31) 00:09:32.140 19231.525 - 19345.998: 95.8042% ( 22) 00:09:32.140 19345.998 - 19460.472: 96.1211% ( 29) 00:09:32.140 19460.472 - 19574.945: 96.2959% ( 16) 00:09:32.140 
19574.945 - 19689.418: 96.4598% ( 15) 00:09:32.140 19689.418 - 19803.892: 96.6128% ( 14) 00:09:32.140 19803.892 - 19918.365: 96.8094% ( 18) 00:09:32.140 19918.365 - 20032.838: 96.9843% ( 16) 00:09:32.140 20032.838 - 20147.312: 97.1372% ( 14) 00:09:32.140 20147.312 - 20261.785: 97.4323% ( 27) 00:09:32.140 20261.785 - 20376.259: 97.5306% ( 9) 00:09:32.140 20376.259 - 20490.732: 97.6071% ( 7) 00:09:32.140 20490.732 - 20605.205: 97.6836% ( 7) 00:09:32.140 20605.205 - 20719.679: 97.7382% ( 5) 00:09:32.140 20719.679 - 20834.152: 97.7601% ( 2) 00:09:32.140 20834.152 - 20948.625: 97.7819% ( 2) 00:09:32.140 20948.625 - 21063.099: 97.8147% ( 3) 00:09:32.140 21063.099 - 21177.572: 97.8475% ( 3) 00:09:32.140 21177.572 - 21292.045: 97.8584% ( 1) 00:09:32.140 21292.045 - 21406.519: 97.9240% ( 6) 00:09:32.140 21406.519 - 21520.992: 98.0114% ( 8) 00:09:32.140 21520.992 - 21635.466: 98.0660% ( 5) 00:09:32.140 21635.466 - 21749.939: 98.1643% ( 9) 00:09:32.140 21749.939 - 21864.412: 98.4594% ( 27) 00:09:32.140 21864.412 - 21978.886: 98.6014% ( 13) 00:09:32.140 28503.867 - 28618.341: 98.6123% ( 1) 00:09:32.140 28618.341 - 28732.814: 98.6451% ( 3) 00:09:32.140 28732.814 - 28847.287: 98.6888% ( 4) 00:09:32.140 28847.287 - 28961.761: 98.7325% ( 4) 00:09:32.140 28961.761 - 29076.234: 98.7653% ( 3) 00:09:32.140 29076.234 - 29190.707: 98.7981% ( 3) 00:09:32.140 29190.707 - 29305.181: 98.8418% ( 4) 00:09:32.140 29305.181 - 29534.128: 98.9073% ( 6) 00:09:32.140 29534.128 - 29763.074: 98.9729% ( 6) 00:09:32.140 29763.074 - 29992.021: 99.0385% ( 6) 00:09:32.140 29992.021 - 30220.968: 99.1149% ( 7) 00:09:32.140 30220.968 - 30449.914: 99.1805% ( 6) 00:09:32.140 30449.914 - 30678.861: 99.2570% ( 7) 00:09:32.140 30678.861 - 30907.808: 99.3007% ( 4) 00:09:32.140 39607.783 - 39836.730: 99.3663% ( 6) 00:09:32.140 39836.730 - 40065.677: 99.4427% ( 7) 00:09:32.140 40065.677 - 40294.624: 99.5302% ( 8) 00:09:32.140 40294.624 - 40523.570: 99.6394% ( 10) 00:09:32.140 40523.570 - 40752.517: 99.7268% ( 8) 00:09:32.140 40752.517 - 40981.464: 99.8361% ( 10) 00:09:32.140 40981.464 - 41210.410: 99.9344% ( 9) 00:09:32.140 41210.410 - 41439.357: 100.0000% ( 6) 00:09:32.140 00:09:32.140 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:32.140 ============================================================================== 00:09:32.140 Range in us Cumulative IO count 00:09:32.140 5466.103 - 5494.721: 0.0219% ( 2) 00:09:32.140 5494.721 - 5523.340: 0.0437% ( 2) 00:09:32.140 5523.340 - 5551.958: 0.0656% ( 2) 00:09:32.140 5551.958 - 5580.576: 0.0874% ( 2) 00:09:32.140 5580.576 - 5609.195: 0.1093% ( 2) 00:09:32.140 5609.195 - 5637.813: 0.1420% ( 3) 00:09:32.140 5637.813 - 5666.431: 0.1967% ( 5) 00:09:32.140 5666.431 - 5695.050: 0.2295% ( 3) 00:09:32.140 5695.050 - 5723.668: 0.2841% ( 5) 00:09:32.140 5723.668 - 5752.286: 0.4043% ( 11) 00:09:32.140 5752.286 - 5780.905: 0.4480% ( 4) 00:09:32.140 5780.905 - 5809.523: 0.4917% ( 4) 00:09:32.140 5809.523 - 5838.141: 0.5354% ( 4) 00:09:32.140 5838.141 - 5866.760: 0.5791% ( 4) 00:09:32.140 5866.760 - 5895.378: 0.6337% ( 5) 00:09:32.140 5895.378 - 5923.997: 0.6665% ( 3) 00:09:32.140 5923.997 - 5952.615: 0.6884% ( 2) 00:09:32.140 5952.615 - 5981.233: 0.6993% ( 1) 00:09:32.140 9043.396 - 9100.632: 0.7212% ( 2) 00:09:32.140 9100.632 - 9157.869: 0.7649% ( 4) 00:09:32.140 9157.869 - 9215.106: 0.8086% ( 4) 00:09:32.140 9215.106 - 9272.342: 0.8741% ( 6) 00:09:32.140 9272.342 - 9329.579: 0.9725% ( 9) 00:09:32.140 9329.579 - 9386.816: 1.1473% ( 16) 00:09:32.140 9386.816 - 9444.052: 1.2238% ( 7) 
00:09:32.140 9444.052 - 9501.289: 1.3112% ( 8) 00:09:32.140 9501.289 - 9558.526: 1.3767% ( 6) 00:09:32.140 9558.526 - 9615.762: 1.3986% ( 2) 00:09:32.140 9844.709 - 9901.946: 1.4314% ( 3) 00:09:32.140 9901.946 - 9959.183: 1.5406% ( 10) 00:09:32.140 9959.183 - 10016.419: 1.7592% ( 20) 00:09:32.140 10016.419 - 10073.656: 1.9777% ( 20) 00:09:32.140 10073.656 - 10130.893: 2.3601% ( 35) 00:09:32.140 10130.893 - 10188.129: 2.6770% ( 29) 00:09:32.140 10188.129 - 10245.366: 2.9611% ( 26) 00:09:32.140 10245.366 - 10302.603: 3.1578% ( 18) 00:09:32.140 10302.603 - 10359.839: 3.3982% ( 22) 00:09:32.140 10359.839 - 10417.076: 3.6713% ( 25) 00:09:32.140 10417.076 - 10474.313: 3.9882% ( 29) 00:09:32.140 10474.313 - 10531.549: 4.3269% ( 31) 00:09:32.140 10531.549 - 10588.786: 4.8733% ( 50) 00:09:32.140 10588.786 - 10646.023: 5.6709% ( 73) 00:09:32.140 10646.023 - 10703.259: 6.5013% ( 76) 00:09:32.140 10703.259 - 10760.496: 7.4191% ( 84) 00:09:32.140 10760.496 - 10817.733: 8.6101% ( 109) 00:09:32.140 10817.733 - 10874.969: 9.9541% ( 123) 00:09:32.140 10874.969 - 10932.206: 11.1451% ( 109) 00:09:32.140 10932.206 - 10989.443: 12.4017% ( 115) 00:09:32.140 10989.443 - 11046.679: 13.8877% ( 136) 00:09:32.140 11046.679 - 11103.916: 15.2972% ( 129) 00:09:32.140 11103.916 - 11161.153: 16.8160% ( 139) 00:09:32.140 11161.153 - 11218.390: 17.9524% ( 104) 00:09:32.140 11218.390 - 11275.626: 19.2854% ( 122) 00:09:32.140 11275.626 - 11332.863: 20.3890% ( 101) 00:09:32.140 11332.863 - 11390.100: 21.4161% ( 94) 00:09:32.140 11390.100 - 11447.336: 22.3776% ( 88) 00:09:32.140 11447.336 - 11504.573: 23.2299% ( 78) 00:09:32.140 11504.573 - 11561.810: 24.1149% ( 81) 00:09:32.140 11561.810 - 11619.046: 25.1420% ( 94) 00:09:32.140 11619.046 - 11676.283: 26.0708% ( 85) 00:09:32.140 11676.283 - 11733.520: 27.1525% ( 99) 00:09:32.140 11733.520 - 11790.756: 28.1578% ( 92) 00:09:32.140 11790.756 - 11847.993: 28.8789% ( 66) 00:09:32.140 11847.993 - 11905.230: 29.6656% ( 72) 00:09:32.140 11905.230 - 11962.466: 30.7365% ( 98) 00:09:32.140 11962.466 - 12019.703: 31.7308% ( 91) 00:09:32.140 12019.703 - 12076.940: 32.4847% ( 69) 00:09:32.140 12076.940 - 12134.176: 33.1075% ( 57) 00:09:32.140 12134.176 - 12191.413: 33.7085% ( 55) 00:09:32.140 12191.413 - 12248.650: 34.3641% ( 60) 00:09:32.140 12248.650 - 12305.886: 35.0087% ( 59) 00:09:32.140 12305.886 - 12363.123: 35.5988% ( 54) 00:09:32.140 12363.123 - 12420.360: 35.8938% ( 27) 00:09:32.140 12420.360 - 12477.597: 36.2434% ( 32) 00:09:32.140 12477.597 - 12534.833: 36.6259% ( 35) 00:09:32.140 12534.833 - 12592.070: 37.0520% ( 39) 00:09:32.140 12592.070 - 12649.307: 37.5109% ( 42) 00:09:32.140 12649.307 - 12706.543: 38.1447% ( 58) 00:09:32.140 12706.543 - 12763.780: 38.7019% ( 51) 00:09:32.140 12763.780 - 12821.017: 39.4122% ( 65) 00:09:32.140 12821.017 - 12878.253: 39.8711% ( 42) 00:09:32.140 12878.253 - 12935.490: 40.3737% ( 46) 00:09:32.140 12935.490 - 12992.727: 40.9747% ( 55) 00:09:32.140 12992.727 - 13049.963: 41.5647% ( 54) 00:09:32.140 13049.963 - 13107.200: 41.9362% ( 34) 00:09:32.140 13107.200 - 13164.437: 42.2421% ( 28) 00:09:32.140 13164.437 - 13221.673: 42.9633% ( 66) 00:09:32.140 13221.673 - 13278.910: 43.4659% ( 46) 00:09:32.140 13278.910 - 13336.147: 43.8483% ( 35) 00:09:32.140 13336.147 - 13393.383: 44.4165% ( 52) 00:09:32.140 13393.383 - 13450.620: 44.8536% ( 40) 00:09:32.140 13450.620 - 13507.857: 45.5310% ( 62) 00:09:32.140 13507.857 - 13565.093: 46.5253% ( 91) 00:09:32.140 13565.093 - 13622.330: 47.3011% ( 71) 00:09:32.140 13622.330 - 13679.567: 48.4156% ( 102) 
00:09:32.140 13679.567 - 13736.803: 49.7705% ( 124) 00:09:32.140 13736.803 - 13794.040: 51.5188% ( 160) 00:09:32.140 13794.040 - 13851.277: 52.9939% ( 135) 00:09:32.140 13851.277 - 13908.514: 54.3269% ( 122) 00:09:32.140 13908.514 - 13965.750: 55.4633% ( 104) 00:09:32.140 13965.750 - 14022.987: 56.7417% ( 117) 00:09:32.140 14022.987 - 14080.224: 57.7906% ( 96) 00:09:32.140 14080.224 - 14137.460: 58.8068% ( 93) 00:09:32.140 14137.460 - 14194.697: 59.8011% ( 91) 00:09:32.140 14194.697 - 14251.934: 61.2107% ( 129) 00:09:32.140 14251.934 - 14309.170: 62.1394% ( 85) 00:09:32.140 14309.170 - 14366.407: 63.2867% ( 105) 00:09:32.140 14366.407 - 14423.644: 64.4340% ( 105) 00:09:32.140 14423.644 - 14480.880: 65.4174% ( 90) 00:09:32.140 14480.880 - 14538.117: 66.3571% ( 86) 00:09:32.140 14538.117 - 14595.354: 67.2858% ( 85) 00:09:32.140 14595.354 - 14652.590: 68.1600% ( 80) 00:09:32.140 14652.590 - 14767.064: 70.1923% ( 186) 00:09:32.140 14767.064 - 14881.537: 71.4379% ( 114) 00:09:32.140 14881.537 - 14996.010: 72.5306% ( 100) 00:09:32.140 14996.010 - 15110.484: 73.4047% ( 80) 00:09:32.140 15110.484 - 15224.957: 74.0494% ( 59) 00:09:32.141 15224.957 - 15339.431: 74.8689% ( 75) 00:09:32.141 15339.431 - 15453.904: 75.8086% ( 86) 00:09:32.141 15453.904 - 15568.377: 77.4366% ( 149) 00:09:32.141 15568.377 - 15682.851: 78.7369% ( 119) 00:09:32.141 15682.851 - 15797.324: 79.9497% ( 111) 00:09:32.141 15797.324 - 15911.797: 81.1407% ( 109) 00:09:32.141 15911.797 - 16026.271: 82.1023% ( 88) 00:09:32.141 16026.271 - 16140.744: 83.3807% ( 117) 00:09:32.141 16140.744 - 16255.217: 84.3641% ( 90) 00:09:32.141 16255.217 - 16369.691: 85.1508% ( 72) 00:09:32.141 16369.691 - 16484.164: 85.8173% ( 61) 00:09:32.141 16484.164 - 16598.638: 86.3418% ( 48) 00:09:32.141 16598.638 - 16713.111: 87.0739% ( 67) 00:09:32.141 16713.111 - 16827.584: 87.6530% ( 53) 00:09:32.141 16827.584 - 16942.058: 88.0135% ( 33) 00:09:32.141 16942.058 - 17056.531: 88.2649% ( 23) 00:09:32.141 17056.531 - 17171.004: 88.6364% ( 34) 00:09:32.141 17171.004 - 17285.478: 89.0625% ( 39) 00:09:32.141 17285.478 - 17399.951: 89.7837% ( 66) 00:09:32.141 17399.951 - 17514.424: 90.1661% ( 35) 00:09:32.141 17514.424 - 17628.898: 90.7015% ( 49) 00:09:32.141 17628.898 - 17743.371: 90.9747% ( 25) 00:09:32.141 17743.371 - 17857.845: 91.1932% ( 20) 00:09:32.141 17857.845 - 17972.318: 91.4226% ( 21) 00:09:32.141 17972.318 - 18086.791: 91.7395% ( 29) 00:09:32.141 18086.791 - 18201.265: 92.2421% ( 46) 00:09:32.141 18201.265 - 18315.738: 92.7120% ( 43) 00:09:32.141 18315.738 - 18430.211: 92.9633% ( 23) 00:09:32.141 18430.211 - 18544.685: 93.2474% ( 26) 00:09:32.141 18544.685 - 18659.158: 93.6407% ( 36) 00:09:32.141 18659.158 - 18773.631: 94.1215% ( 44) 00:09:32.141 18773.631 - 18888.105: 94.3837% ( 24) 00:09:32.141 18888.105 - 19002.578: 94.6351% ( 23) 00:09:32.141 19002.578 - 19117.052: 94.8973% ( 24) 00:09:32.141 19117.052 - 19231.525: 95.1267% ( 21) 00:09:32.141 19231.525 - 19345.998: 95.4436% ( 29) 00:09:32.141 19345.998 - 19460.472: 95.8151% ( 34) 00:09:32.141 19460.472 - 19574.945: 96.1976% ( 35) 00:09:32.141 19574.945 - 19689.418: 96.4707% ( 25) 00:09:32.141 19689.418 - 19803.892: 96.7657% ( 27) 00:09:32.141 19803.892 - 19918.365: 96.9624% ( 18) 00:09:32.141 19918.365 - 20032.838: 97.1591% ( 18) 00:09:32.141 20032.838 - 20147.312: 97.3339% ( 16) 00:09:32.141 20147.312 - 20261.785: 97.5743% ( 22) 00:09:32.141 20261.785 - 20376.259: 97.8475% ( 25) 00:09:32.141 20376.259 - 20490.732: 98.0660% ( 20) 00:09:32.141 20490.732 - 20605.205: 98.2299% ( 15) 00:09:32.141 
20605.205 - 20719.679: 98.3392% ( 10) 00:09:32.141 20719.679 - 20834.152: 98.4266% ( 8) 00:09:32.141 20834.152 - 20948.625: 98.4812% ( 5) 00:09:32.141 20948.625 - 21063.099: 98.5358% ( 5) 00:09:32.141 21063.099 - 21177.572: 98.5795% ( 4) 00:09:32.141 21177.572 - 21292.045: 98.6014% ( 2) 00:09:32.141 28961.761 - 29076.234: 98.6233% ( 2) 00:09:32.141 29076.234 - 29190.707: 98.6670% ( 4) 00:09:32.141 29190.707 - 29305.181: 98.7107% ( 4) 00:09:32.141 29305.181 - 29534.128: 98.7981% ( 8) 00:09:32.141 29534.128 - 29763.074: 98.8964% ( 9) 00:09:32.141 29763.074 - 29992.021: 98.9620% ( 6) 00:09:32.141 29992.021 - 30220.968: 99.0166% ( 5) 00:09:32.141 30220.968 - 30449.914: 99.0822% ( 6) 00:09:32.141 30449.914 - 30678.861: 99.1587% ( 7) 00:09:32.141 30678.861 - 30907.808: 99.2351% ( 7) 00:09:32.141 30907.808 - 31136.755: 99.2898% ( 5) 00:09:32.141 31136.755 - 31365.701: 99.3007% ( 1) 00:09:32.141 39607.783 - 39836.730: 99.3772% ( 7) 00:09:32.141 39836.730 - 40065.677: 99.4755% ( 9) 00:09:32.141 40065.677 - 40294.624: 99.6066% ( 12) 00:09:32.141 40294.624 - 40523.570: 99.8580% ( 23) 00:09:32.141 40523.570 - 40752.517: 99.9454% ( 8) 00:09:32.141 40752.517 - 40981.464: 100.0000% ( 5) 00:09:32.141 00:09:32.141 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:32.141 ============================================================================== 00:09:32.141 Range in us Cumulative IO count 00:09:32.141 5122.683 - 5151.301: 0.0109% ( 1) 00:09:32.141 5294.393 - 5323.011: 0.0328% ( 2) 00:09:32.141 5323.011 - 5351.630: 0.1093% ( 7) 00:09:32.141 5351.630 - 5380.248: 0.1967% ( 8) 00:09:32.141 5380.248 - 5408.866: 0.3059% ( 10) 00:09:32.141 5408.866 - 5437.485: 0.3715% ( 6) 00:09:32.141 5437.485 - 5466.103: 0.5026% ( 12) 00:09:32.141 5466.103 - 5494.721: 0.5245% ( 2) 00:09:32.141 5494.721 - 5523.340: 0.5463% ( 2) 00:09:32.141 5523.340 - 5551.958: 0.5573% ( 1) 00:09:32.141 5551.958 - 5580.576: 0.5791% ( 2) 00:09:32.141 5580.576 - 5609.195: 0.6010% ( 2) 00:09:32.141 5609.195 - 5637.813: 0.6228% ( 2) 00:09:32.141 5637.813 - 5666.431: 0.6447% ( 2) 00:09:32.141 5666.431 - 5695.050: 0.6665% ( 2) 00:09:32.141 5695.050 - 5723.668: 0.6884% ( 2) 00:09:32.141 5723.668 - 5752.286: 0.6993% ( 1) 00:09:32.141 8471.029 - 8528.266: 0.7102% ( 1) 00:09:32.141 8528.266 - 8585.502: 0.7321% ( 2) 00:09:32.141 8585.502 - 8642.739: 0.8086% ( 7) 00:09:32.141 8642.739 - 8699.976: 0.9069% ( 9) 00:09:32.141 8699.976 - 8757.212: 1.0380% ( 12) 00:09:32.141 8757.212 - 8814.449: 1.1801% ( 13) 00:09:32.141 8814.449 - 8871.686: 1.2566% ( 7) 00:09:32.141 8871.686 - 8928.922: 1.3003% ( 4) 00:09:32.141 8928.922 - 8986.159: 1.3330% ( 3) 00:09:32.141 8986.159 - 9043.396: 1.3767% ( 4) 00:09:32.141 9043.396 - 9100.632: 1.3986% ( 2) 00:09:32.141 9787.472 - 9844.709: 1.4095% ( 1) 00:09:32.141 9844.709 - 9901.946: 1.4205% ( 1) 00:09:32.141 9901.946 - 9959.183: 1.4532% ( 3) 00:09:32.141 9959.183 - 10016.419: 1.6718% ( 20) 00:09:32.141 10016.419 - 10073.656: 1.8794% ( 19) 00:09:32.141 10073.656 - 10130.893: 2.1744% ( 27) 00:09:32.141 10130.893 - 10188.129: 2.6005% ( 39) 00:09:32.141 10188.129 - 10245.366: 3.0157% ( 38) 00:09:32.141 10245.366 - 10302.603: 3.2998% ( 26) 00:09:32.141 10302.603 - 10359.839: 3.7041% ( 37) 00:09:32.141 10359.839 - 10417.076: 4.1302% ( 39) 00:09:32.141 10417.076 - 10474.313: 4.4143% ( 26) 00:09:32.141 10474.313 - 10531.549: 4.8405% ( 39) 00:09:32.141 10531.549 - 10588.786: 5.3322% ( 45) 00:09:32.141 10588.786 - 10646.023: 5.9768% ( 59) 00:09:32.141 10646.023 - 10703.259: 7.0586% ( 99) 00:09:32.141 10703.259 - 
10760.496: 8.0966% ( 95) 00:09:32.141 10760.496 - 10817.733: 9.4733% ( 126) 00:09:32.141 10817.733 - 10874.969: 10.5441% ( 98) 00:09:32.141 10874.969 - 10932.206: 11.8335% ( 118) 00:09:32.141 10932.206 - 10989.443: 13.2758% ( 132) 00:09:32.141 10989.443 - 11046.679: 14.7509% ( 135) 00:09:32.141 11046.679 - 11103.916: 16.2150% ( 134) 00:09:32.141 11103.916 - 11161.153: 17.5699% ( 124) 00:09:32.141 11161.153 - 11218.390: 18.7172% ( 105) 00:09:32.141 11218.390 - 11275.626: 19.8317% ( 102) 00:09:32.141 11275.626 - 11332.863: 20.8698% ( 95) 00:09:32.141 11332.863 - 11390.100: 21.9843% ( 102) 00:09:32.141 11390.100 - 11447.336: 22.9786% ( 91) 00:09:32.141 11447.336 - 11504.573: 23.8527% ( 80) 00:09:32.141 11504.573 - 11561.810: 24.7268% ( 80) 00:09:32.141 11561.810 - 11619.046: 25.6010% ( 80) 00:09:32.141 11619.046 - 11676.283: 26.5297% ( 85) 00:09:32.141 11676.283 - 11733.520: 27.3492% ( 75) 00:09:32.141 11733.520 - 11790.756: 28.2670% ( 84) 00:09:32.141 11790.756 - 11847.993: 28.9663% ( 64) 00:09:32.141 11847.993 - 11905.230: 29.8077% ( 77) 00:09:32.141 11905.230 - 11962.466: 31.0533% ( 114) 00:09:32.141 11962.466 - 12019.703: 32.0367% ( 90) 00:09:32.141 12019.703 - 12076.940: 32.8999% ( 79) 00:09:32.141 12076.940 - 12134.176: 33.6538% ( 69) 00:09:32.141 12134.176 - 12191.413: 34.1237% ( 43) 00:09:32.141 12191.413 - 12248.650: 34.5170% ( 36) 00:09:32.141 12248.650 - 12305.886: 34.8885% ( 34) 00:09:32.141 12305.886 - 12363.123: 35.4567% ( 52) 00:09:32.141 12363.123 - 12420.360: 36.0795% ( 57) 00:09:32.141 12420.360 - 12477.597: 36.3855% ( 28) 00:09:32.141 12477.597 - 12534.833: 36.6696% ( 26) 00:09:32.141 12534.833 - 12592.070: 37.0192% ( 32) 00:09:32.141 12592.070 - 12649.307: 37.4563% ( 40) 00:09:32.141 12649.307 - 12706.543: 37.9808% ( 48) 00:09:32.141 12706.543 - 12763.780: 38.2758% ( 27) 00:09:32.141 12763.780 - 12821.017: 38.5599% ( 26) 00:09:32.141 12821.017 - 12878.253: 39.0734% ( 47) 00:09:32.141 12878.253 - 12935.490: 39.3247% ( 23) 00:09:32.141 12935.490 - 12992.727: 39.6307% ( 28) 00:09:32.141 12992.727 - 13049.963: 40.0896% ( 42) 00:09:32.141 13049.963 - 13107.200: 40.6141% ( 48) 00:09:32.141 13107.200 - 13164.437: 41.3134% ( 64) 00:09:32.141 13164.437 - 13221.673: 41.6958% ( 35) 00:09:32.141 13221.673 - 13278.910: 42.3405% ( 59) 00:09:32.141 13278.910 - 13336.147: 43.1053% ( 70) 00:09:32.141 13336.147 - 13393.383: 43.7281% ( 57) 00:09:32.141 13393.383 - 13450.620: 44.4165% ( 63) 00:09:32.141 13450.620 - 13507.857: 45.0503% ( 58) 00:09:32.141 13507.857 - 13565.093: 45.7277% ( 62) 00:09:32.141 13565.093 - 13622.330: 46.9187% ( 109) 00:09:32.141 13622.330 - 13679.567: 48.4594% ( 141) 00:09:32.141 13679.567 - 13736.803: 50.0437% ( 145) 00:09:32.141 13736.803 - 13794.040: 51.7373% ( 155) 00:09:32.141 13794.040 - 13851.277: 53.4637% ( 158) 00:09:32.141 13851.277 - 13908.514: 54.7531% ( 118) 00:09:32.141 13908.514 - 13965.750: 56.0861% ( 122) 00:09:32.141 13965.750 - 14022.987: 57.4191% ( 122) 00:09:32.141 14022.987 - 14080.224: 58.4899% ( 98) 00:09:32.141 14080.224 - 14137.460: 59.2220% ( 67) 00:09:32.141 14137.460 - 14194.697: 59.9323% ( 65) 00:09:32.141 14194.697 - 14251.934: 60.8719% ( 86) 00:09:32.141 14251.934 - 14309.170: 61.7133% ( 77) 00:09:32.141 14309.170 - 14366.407: 62.6530% ( 86) 00:09:32.141 14366.407 - 14423.644: 64.1827% ( 140) 00:09:32.141 14423.644 - 14480.880: 65.4065% ( 112) 00:09:32.141 14480.880 - 14538.117: 66.4554% ( 96) 00:09:32.141 14538.117 - 14595.354: 67.5481% ( 100) 00:09:32.141 14595.354 - 14652.590: 68.5315% ( 90) 00:09:32.141 14652.590 - 14767.064: 
69.6897% ( 106) 00:09:32.142 14767.064 - 14881.537: 70.7386% ( 96) 00:09:32.142 14881.537 - 14996.010: 71.7330% ( 91) 00:09:32.142 14996.010 - 15110.484: 72.4869% ( 69) 00:09:32.142 15110.484 - 15224.957: 73.8418% ( 124) 00:09:32.142 15224.957 - 15339.431: 75.0109% ( 107) 00:09:32.142 15339.431 - 15453.904: 75.9834% ( 89) 00:09:32.142 15453.904 - 15568.377: 77.0979% ( 102) 00:09:32.142 15568.377 - 15682.851: 79.0428% ( 178) 00:09:32.142 15682.851 - 15797.324: 80.3212% ( 117) 00:09:32.142 15797.324 - 15911.797: 81.2391% ( 84) 00:09:32.142 15911.797 - 16026.271: 82.2115% ( 89) 00:09:32.142 16026.271 - 16140.744: 83.2059% ( 91) 00:09:32.142 16140.744 - 16255.217: 83.9926% ( 72) 00:09:32.142 16255.217 - 16369.691: 84.4952% ( 46) 00:09:32.142 16369.691 - 16484.164: 85.1726% ( 62) 00:09:32.142 16484.164 - 16598.638: 85.5660% ( 36) 00:09:32.142 16598.638 - 16713.111: 86.1342% ( 52) 00:09:32.142 16713.111 - 16827.584: 86.8444% ( 65) 00:09:32.142 16827.584 - 16942.058: 87.3252% ( 44) 00:09:32.142 16942.058 - 17056.531: 87.7513% ( 39) 00:09:32.142 17056.531 - 17171.004: 88.4397% ( 63) 00:09:32.142 17171.004 - 17285.478: 89.1171% ( 62) 00:09:32.142 17285.478 - 17399.951: 89.4996% ( 35) 00:09:32.142 17399.951 - 17514.424: 89.9038% ( 37) 00:09:32.142 17514.424 - 17628.898: 90.1879% ( 26) 00:09:32.142 17628.898 - 17743.371: 90.8435% ( 60) 00:09:32.142 17743.371 - 17857.845: 91.5975% ( 69) 00:09:32.142 17857.845 - 17972.318: 92.1766% ( 53) 00:09:32.142 17972.318 - 18086.791: 92.5809% ( 37) 00:09:32.142 18086.791 - 18201.265: 92.9305% ( 32) 00:09:32.142 18201.265 - 18315.738: 93.2365% ( 28) 00:09:32.142 18315.738 - 18430.211: 93.5424% ( 28) 00:09:32.142 18430.211 - 18544.685: 93.9030% ( 33) 00:09:32.142 18544.685 - 18659.158: 94.4493% ( 50) 00:09:32.142 18659.158 - 18773.631: 94.8536% ( 37) 00:09:32.142 18773.631 - 18888.105: 95.1923% ( 31) 00:09:32.142 18888.105 - 19002.578: 95.5201% ( 30) 00:09:32.142 19002.578 - 19117.052: 95.8151% ( 27) 00:09:32.142 19117.052 - 19231.525: 95.9790% ( 15) 00:09:32.142 19231.525 - 19345.998: 96.1429% ( 15) 00:09:32.142 19345.998 - 19460.472: 96.4598% ( 29) 00:09:32.142 19460.472 - 19574.945: 96.7111% ( 23) 00:09:32.142 19574.945 - 19689.418: 96.9952% ( 26) 00:09:32.142 19689.418 - 19803.892: 97.2684% ( 25) 00:09:32.142 19803.892 - 19918.365: 97.5087% ( 22) 00:09:32.142 19918.365 - 20032.838: 97.7163% ( 19) 00:09:32.142 20032.838 - 20147.312: 97.9240% ( 19) 00:09:32.142 20147.312 - 20261.785: 98.1862% ( 24) 00:09:32.142 20261.785 - 20376.259: 98.3719% ( 17) 00:09:32.142 20376.259 - 20490.732: 98.4703% ( 9) 00:09:32.142 20490.732 - 20605.205: 98.5577% ( 8) 00:09:32.142 20605.205 - 20719.679: 98.5795% ( 2) 00:09:32.142 20719.679 - 20834.152: 98.6014% ( 2) 00:09:32.142 29190.707 - 29305.181: 98.6123% ( 1) 00:09:32.142 29305.181 - 29534.128: 98.6233% ( 1) 00:09:32.142 29534.128 - 29763.074: 98.7107% ( 8) 00:09:32.142 29763.074 - 29992.021: 98.7872% ( 7) 00:09:32.142 29992.021 - 30220.968: 98.8746% ( 8) 00:09:32.142 30220.968 - 30449.914: 98.9510% ( 7) 00:09:32.142 30449.914 - 30678.861: 99.0275% ( 7) 00:09:32.142 30678.861 - 30907.808: 99.1040% ( 7) 00:09:32.142 30907.808 - 31136.755: 99.1696% ( 6) 00:09:32.142 31136.755 - 31365.701: 99.2351% ( 6) 00:09:32.142 31365.701 - 31594.648: 99.3007% ( 6) 00:09:32.142 38691.997 - 38920.943: 99.3116% ( 1) 00:09:32.142 38920.943 - 39149.890: 99.3335% ( 2) 00:09:32.142 39149.890 - 39378.837: 99.3553% ( 2) 00:09:32.142 39378.837 - 39607.783: 99.3881% ( 3) 00:09:32.142 39607.783 - 39836.730: 99.4427% ( 5) 00:09:32.142 39836.730 - 
40065.677: 99.5302% ( 8) 00:09:32.142 40065.677 - 40294.624: 99.6066% ( 7) 00:09:32.142 40294.624 - 40523.570: 99.6831% ( 7) 00:09:32.142 40523.570 - 40752.517: 99.7378% ( 5) 00:09:32.142 40752.517 - 40981.464: 99.7924% ( 5) 00:09:32.142 40981.464 - 41210.410: 99.8470% ( 5) 00:09:32.142 41210.410 - 41439.357: 99.8907% ( 4) 00:09:32.142 41439.357 - 41668.304: 99.9563% ( 6) 00:09:32.142 41668.304 - 41897.251: 100.0000% ( 4) 00:09:32.142 00:09:32.401 11:51:31 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:09:32.401 00:09:32.401 real 0m2.529s 00:09:32.401 user 0m2.176s 00:09:32.401 sys 0m0.252s 00:09:32.401 11:51:31 nvme.nvme_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:32.401 11:51:31 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:09:32.401 ************************************ 00:09:32.401 END TEST nvme_perf 00:09:32.401 ************************************ 00:09:32.401 11:51:31 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:32.401 11:51:31 nvme -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:09:32.401 11:51:31 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:32.401 11:51:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:32.401 ************************************ 00:09:32.401 START TEST nvme_hello_world 00:09:32.401 ************************************ 00:09:32.401 11:51:31 nvme.nvme_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:32.659 Initializing NVMe Controllers 00:09:32.659 Attached to 0000:00:10.0 00:09:32.659 Namespace ID: 1 size: 6GB 00:09:32.659 Attached to 0000:00:11.0 00:09:32.659 Namespace ID: 1 size: 5GB 00:09:32.659 Attached to 0000:00:13.0 00:09:32.659 Namespace ID: 1 size: 1GB 00:09:32.659 Attached to 0000:00:12.0 00:09:32.659 Namespace ID: 1 size: 4GB 00:09:32.659 Namespace ID: 2 size: 4GB 00:09:32.659 Namespace ID: 3 size: 4GB 00:09:32.659 Initialization complete. 00:09:32.659 INFO: using host memory buffer for IO 00:09:32.659 Hello world! 00:09:32.659 INFO: using host memory buffer for IO 00:09:32.659 Hello world! 00:09:32.659 INFO: using host memory buffer for IO 00:09:32.659 Hello world! 00:09:32.659 INFO: using host memory buffer for IO 00:09:32.659 Hello world! 00:09:32.659 INFO: using host memory buffer for IO 00:09:32.659 Hello world! 00:09:32.659 INFO: using host memory buffer for IO 00:09:32.659 Hello world! 
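The "Attached to ..." and "Namespace ID: n size: xGB" lines above are produced by SPDK's hello_world example, which discovers controllers through the driver's probe/attach callback pair. Below is a minimal enumeration sketch along those lines, not the actual example source: the write/read round trip and error handling are omitted, the program name "hello_sketch" is made up, and the GB arithmetic is illustrative.

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	return true; /* claim every controller found during enumeration */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	uint32_t nsid;

	printf("Attached to %s\n", trid->traddr);
	/* Walk the active namespaces and report their sizes, as in the log above. */
	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		printf("Namespace ID: %u size: %" PRIu64 "GB\n", nsid,
		       spdk_nvme_ns_get_size(ns) / 1000000000ULL);
	}
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "hello_sketch"; /* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}
	/* trid == NULL: enumerate local PCIe NVMe controllers. */
	return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0;
}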
00:09:32.659
00:09:32.659 real 0m0.252s
00:09:32.659 user 0m0.088s
00:09:32.659 sys 0m0.118s
00:09:32.659 11:51:31 nvme.nvme_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable
00:09:32.659 11:51:31 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:09:32.659 ************************************
00:09:32.659 END TEST nvme_hello_world
00:09:32.659 ************************************
00:09:32.659 11:51:31 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:32.659 11:51:31 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:09:32.659 11:51:31 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable
00:09:32.659 11:51:31 nvme -- common/autotest_common.sh@10 -- # set +x
00:09:32.659 ************************************
00:09:32.659 START TEST nvme_sgl
00:09:32.659 ************************************
00:09:32.659 11:51:31 nvme.nvme_sgl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:09:32.920 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:09:32.920 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:09:32.920 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:09:32.920 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:09:32.920 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:09:32.920 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:09:32.920 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:09:32.920 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:09:32.920 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:09:32.920 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:09:32.920 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:09:32.920 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:09:32.920 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:09:32.920 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:09:32.920 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:09:32.920 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:09:32.920 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:09:32.920 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:09:32.920 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:09:32.920 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:09:32.920 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:09:32.920 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:09:32.920 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:09:32.920 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:09:32.920 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:09:32.920 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:09:32.920 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:09:32.920 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:09:32.920 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:09:32.920 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:09:32.920 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:09:32.920 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:09:32.920 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:09:32.920 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:09:32.920 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:09:32.920 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:09:32.920 NVMe Readv/Writev Request test 00:09:32.920 Attached to 0000:00:10.0 00:09:32.920 Attached to 0000:00:11.0 00:09:32.920 Attached to 0000:00:13.0 00:09:32.920 Attached to 0000:00:12.0 00:09:32.920 0000:00:10.0: build_io_request_2 test passed 00:09:32.920 0000:00:10.0: build_io_request_4 test passed 00:09:32.920 0000:00:10.0: build_io_request_5 test passed 00:09:32.920 0000:00:10.0: build_io_request_6 test passed 00:09:32.920 0000:00:10.0: build_io_request_7 test passed 00:09:32.920 0000:00:10.0: build_io_request_10 test passed 00:09:32.920 0000:00:11.0: build_io_request_2 test passed 00:09:32.920 0000:00:11.0: build_io_request_4 test passed 00:09:32.920 0000:00:11.0: build_io_request_5 test passed 00:09:32.920 0000:00:11.0: build_io_request_6 test passed 00:09:32.920 0000:00:11.0: build_io_request_7 test passed 00:09:32.920 0000:00:11.0: build_io_request_10 test passed 00:09:32.920 Cleaning up... 00:09:32.920 00:09:32.920 real 0m0.306s 00:09:32.920 user 0m0.142s 00:09:32.920 sys 0m0.121s 00:09:32.920 11:51:31 nvme.nvme_sgl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:32.920 11:51:31 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:09:32.920 ************************************ 00:09:32.920 END TEST nvme_sgl 00:09:32.920 ************************************ 00:09:32.920 11:51:31 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:32.920 11:51:31 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:32.920 11:51:31 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:32.920 11:51:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:32.920 ************************************ 00:09:32.920 START TEST nvme_e2edp 00:09:32.920 ************************************ 00:09:32.920 11:51:31 nvme.nvme_e2edp -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:33.180 NVMe Write/Read with End-to-End data protection test 00:09:33.180 Attached to 0000:00:10.0 00:09:33.180 Attached to 0000:00:11.0 00:09:33.180 Attached to 0000:00:13.0 00:09:33.180 Attached to 0000:00:12.0 00:09:33.180 Cleaning up... 
00:09:33.180 00:09:33.180 real 0m0.256s 00:09:33.180 user 0m0.082s 00:09:33.180 sys 0m0.128s 00:09:33.180 11:51:32 nvme.nvme_e2edp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:33.180 11:51:32 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:09:33.180 ************************************ 00:09:33.180 END TEST nvme_e2edp 00:09:33.180 ************************************ 00:09:33.440 11:51:32 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:33.440 11:51:32 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:33.440 11:51:32 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:33.440 11:51:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:33.440 ************************************ 00:09:33.440 START TEST nvme_reserve 00:09:33.440 ************************************ 00:09:33.440 11:51:32 nvme.nvme_reserve -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:33.699 ===================================================== 00:09:33.699 NVMe Controller at PCI bus 0, device 16, function 0 00:09:33.699 ===================================================== 00:09:33.699 Reservations: Not Supported 00:09:33.700 ===================================================== 00:09:33.700 NVMe Controller at PCI bus 0, device 17, function 0 00:09:33.700 ===================================================== 00:09:33.700 Reservations: Not Supported 00:09:33.700 ===================================================== 00:09:33.700 NVMe Controller at PCI bus 0, device 19, function 0 00:09:33.700 ===================================================== 00:09:33.700 Reservations: Not Supported 00:09:33.700 ===================================================== 00:09:33.700 NVMe Controller at PCI bus 0, device 18, function 0 00:09:33.700 ===================================================== 00:09:33.700 Reservations: Not Supported 00:09:33.700 Reservation test passed 00:09:33.700 00:09:33.700 real 0m0.241s 00:09:33.700 user 0m0.088s 00:09:33.700 sys 0m0.113s 00:09:33.700 11:51:32 nvme.nvme_reserve -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:33.700 11:51:32 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:09:33.700 ************************************ 00:09:33.700 END TEST nvme_reserve 00:09:33.700 ************************************ 00:09:33.700 11:51:32 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:33.700 11:51:32 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:33.700 11:51:32 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:33.700 11:51:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:33.700 ************************************ 00:09:33.700 START TEST nvme_err_injection 00:09:33.700 ************************************ 00:09:33.700 11:51:32 nvme.nvme_err_injection -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:33.965 NVMe Error Injection test 00:09:33.965 Attached to 0000:00:10.0 00:09:33.965 Attached to 0000:00:11.0 00:09:33.965 Attached to 0000:00:13.0 00:09:33.965 Attached to 0000:00:12.0 00:09:33.965 0000:00:10.0: get features failed as expected 00:09:33.965 0000:00:11.0: get features failed as expected 00:09:33.965 0000:00:13.0: get features failed as expected 00:09:33.965 0000:00:12.0: get features failed as expected 00:09:33.965 
0000:00:10.0: get features successfully as expected 00:09:33.965 0000:00:11.0: get features successfully as expected 00:09:33.965 0000:00:13.0: get features successfully as expected 00:09:33.965 0000:00:12.0: get features successfully as expected 00:09:33.965 0000:00:10.0: read failed as expected 00:09:33.965 0000:00:11.0: read failed as expected 00:09:33.965 0000:00:13.0: read failed as expected 00:09:33.965 0000:00:12.0: read failed as expected 00:09:33.965 0000:00:11.0: read successfully as expected 00:09:33.965 0000:00:10.0: read successfully as expected 00:09:33.965 0000:00:13.0: read successfully as expected 00:09:33.965 0000:00:12.0: read successfully as expected 00:09:33.965 Cleaning up... 00:09:33.965 00:09:33.965 real 0m0.250s 00:09:33.965 user 0m0.081s 00:09:33.965 sys 0m0.127s 00:09:33.965 11:51:32 nvme.nvme_err_injection -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:33.965 11:51:32 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:09:33.965 ************************************ 00:09:33.965 END TEST nvme_err_injection 00:09:33.965 ************************************ 00:09:33.965 11:51:32 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:33.965 11:51:32 nvme -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:09:33.965 11:51:32 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:33.965 11:51:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:33.965 ************************************ 00:09:33.965 START TEST nvme_overhead 00:09:33.965 ************************************ 00:09:33.965 11:51:32 nvme.nvme_overhead -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:35.367 Initializing NVMe Controllers 00:09:35.367 Attached to 0000:00:10.0 00:09:35.367 Attached to 0000:00:11.0 00:09:35.367 Attached to 0000:00:13.0 00:09:35.367 Attached to 0000:00:12.0 00:09:35.367 Initialization complete. Launching workers. 
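A hedged reading of the overhead invocation above; flag meanings are inferred from the command line and the output that follows, not from the tool's documentation:

/home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
#   -o 4096  assumed: I/O size in bytes (4 KiB)
#   -t 1     assumed: run time in seconds
#   -H       assumed: print the submit/complete latency histograms shown below
#   -i 0     shared-memory id, matching the other tests in this run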
00:09:35.367 submit (in ns) avg, min, max = 13036.4, 9407.9, 111316.2 00:09:35.367 complete (in ns) avg, min, max = 7937.5, 6161.6, 653598.3 00:09:35.367 00:09:35.367 Submit histogram 00:09:35.367 ================ 00:09:35.367 Range in us Cumulative Count 00:09:35.367 9.390 - 9.446: 0.0104% ( 1) 00:09:35.367 9.782 - 9.838: 0.0207% ( 1) 00:09:35.367 10.117 - 10.173: 0.0311% ( 1) 00:09:35.367 10.564 - 10.620: 0.0415% ( 1) 00:09:35.367 10.676 - 10.732: 0.0519% ( 1) 00:09:35.367 10.732 - 10.788: 0.0830% ( 3) 00:09:35.367 10.788 - 10.844: 0.1141% ( 3) 00:09:35.367 10.844 - 10.900: 0.1659% ( 5) 00:09:35.367 10.900 - 10.955: 0.2074% ( 4) 00:09:35.367 10.955 - 11.011: 0.2593% ( 5) 00:09:35.367 11.011 - 11.067: 0.3941% ( 13) 00:09:35.367 11.067 - 11.123: 0.5704% ( 17) 00:09:35.367 11.123 - 11.179: 0.8711% ( 29) 00:09:35.367 11.179 - 11.235: 1.1718% ( 29) 00:09:35.367 11.235 - 11.291: 1.5970% ( 41) 00:09:35.367 11.291 - 11.347: 2.2089% ( 59) 00:09:35.367 11.347 - 11.403: 2.6859% ( 46) 00:09:35.367 11.403 - 11.459: 3.2874% ( 58) 00:09:35.367 11.459 - 11.514: 3.9511% ( 64) 00:09:35.367 11.514 - 11.570: 4.9673% ( 98) 00:09:35.367 11.570 - 11.626: 6.3466% ( 133) 00:09:35.367 11.626 - 11.682: 7.8917% ( 149) 00:09:35.367 11.682 - 11.738: 9.7584% ( 180) 00:09:35.367 11.738 - 11.794: 11.9465% ( 211) 00:09:35.367 11.794 - 11.850: 14.4665% ( 243) 00:09:35.367 11.850 - 11.906: 17.2975% ( 273) 00:09:35.367 11.906 - 11.962: 20.4604% ( 305) 00:09:35.367 11.962 - 12.017: 23.7685% ( 319) 00:09:35.367 12.017 - 12.073: 27.1077% ( 322) 00:09:35.367 12.073 - 12.129: 30.3640% ( 314) 00:09:35.367 12.129 - 12.185: 33.5684% ( 309) 00:09:35.367 12.185 - 12.241: 36.9906% ( 330) 00:09:35.367 12.241 - 12.297: 40.4127% ( 330) 00:09:35.367 12.297 - 12.353: 44.0942% ( 355) 00:09:35.367 12.353 - 12.409: 47.4645% ( 325) 00:09:35.367 12.409 - 12.465: 50.6689% ( 309) 00:09:35.367 12.465 - 12.521: 54.0288% ( 324) 00:09:35.367 12.521 - 12.576: 57.0258% ( 289) 00:09:35.367 12.576 - 12.632: 59.9399% ( 281) 00:09:35.367 12.632 - 12.688: 62.6880% ( 265) 00:09:35.367 12.688 - 12.744: 65.2701% ( 249) 00:09:35.367 12.744 - 12.800: 67.1886% ( 185) 00:09:35.367 12.800 - 12.856: 69.2212% ( 196) 00:09:35.367 12.856 - 12.912: 71.0671% ( 178) 00:09:35.367 12.912 - 12.968: 72.5397% ( 142) 00:09:35.367 12.968 - 13.024: 73.9500% ( 136) 00:09:35.367 13.024 - 13.079: 75.3811% ( 138) 00:09:35.367 13.079 - 13.135: 76.5841% ( 116) 00:09:35.367 13.135 - 13.191: 77.6211% ( 100) 00:09:35.367 13.191 - 13.247: 78.6062% ( 95) 00:09:35.367 13.247 - 13.303: 79.5707% ( 93) 00:09:35.367 13.303 - 13.359: 80.2033% ( 61) 00:09:35.367 13.359 - 13.415: 80.8981% ( 67) 00:09:35.367 13.415 - 13.471: 81.5306% ( 61) 00:09:35.367 13.471 - 13.527: 82.3292% ( 77) 00:09:35.367 13.527 - 13.583: 83.0862% ( 73) 00:09:35.367 13.583 - 13.638: 83.4595% ( 36) 00:09:35.367 13.638 - 13.694: 83.9780% ( 50) 00:09:35.367 13.694 - 13.750: 84.2995% ( 31) 00:09:35.367 13.750 - 13.806: 84.6624% ( 35) 00:09:35.367 13.806 - 13.862: 84.8699% ( 20) 00:09:35.367 13.862 - 13.918: 84.9943% ( 12) 00:09:35.367 13.918 - 13.974: 85.2224% ( 22) 00:09:35.367 13.974 - 14.030: 85.4610% ( 23) 00:09:35.367 14.030 - 14.086: 85.6372% ( 17) 00:09:35.367 14.086 - 14.141: 85.8032% ( 16) 00:09:35.367 14.141 - 14.197: 85.9898% ( 18) 00:09:35.367 14.197 - 14.253: 86.1039% ( 11) 00:09:35.367 14.253 - 14.309: 86.2284% ( 12) 00:09:35.367 14.309 - 14.421: 86.4565% ( 22) 00:09:35.367 14.421 - 14.533: 86.7158% ( 25) 00:09:35.367 14.533 - 14.645: 86.9646% ( 24) 00:09:35.367 14.645 - 14.756: 87.6387% ( 65) 00:09:35.367 14.756 - 
14.868: 88.6861% ( 101) 00:09:35.367 14.868 - 14.980: 89.5365% ( 82) 00:09:35.367 14.980 - 15.092: 90.3350% ( 77) 00:09:35.367 15.092 - 15.203: 90.9779% ( 62) 00:09:35.367 15.203 - 15.315: 91.6623% ( 66) 00:09:35.367 15.315 - 15.427: 92.0046% ( 33) 00:09:35.367 15.427 - 15.539: 92.3468% ( 33) 00:09:35.367 15.539 - 15.651: 92.7097% ( 35) 00:09:35.367 15.651 - 15.762: 93.0623% ( 34) 00:09:35.367 15.762 - 15.874: 93.3423% ( 27) 00:09:35.367 15.874 - 15.986: 93.6534% ( 30) 00:09:35.367 15.986 - 16.098: 94.0371% ( 37) 00:09:35.367 16.098 - 16.210: 94.3793% ( 33) 00:09:35.367 16.210 - 16.321: 94.5556% ( 17) 00:09:35.367 16.321 - 16.433: 94.7838% ( 22) 00:09:35.367 16.433 - 16.545: 95.0119% ( 22) 00:09:35.367 16.545 - 16.657: 95.1778% ( 16) 00:09:35.367 16.657 - 16.769: 95.4475% ( 26) 00:09:35.367 16.769 - 16.880: 95.6445% ( 19) 00:09:35.367 16.880 - 16.992: 95.7897% ( 14) 00:09:35.367 16.992 - 17.104: 95.9038% ( 11) 00:09:35.367 17.104 - 17.216: 96.0489% ( 14) 00:09:35.367 17.216 - 17.328: 96.1941% ( 14) 00:09:35.367 17.328 - 17.439: 96.4119% ( 21) 00:09:35.367 17.439 - 17.551: 96.5052% ( 9) 00:09:35.367 17.551 - 17.663: 96.6400% ( 13) 00:09:35.367 17.663 - 17.775: 96.7230% ( 8) 00:09:35.367 17.775 - 17.886: 96.7956% ( 7) 00:09:35.367 17.886 - 17.998: 96.8786% ( 8) 00:09:35.367 17.998 - 18.110: 97.0030% ( 12) 00:09:35.367 18.110 - 18.222: 97.0756% ( 7) 00:09:35.367 18.222 - 18.334: 97.1689% ( 9) 00:09:35.367 18.334 - 18.445: 97.2726% ( 10) 00:09:35.367 18.445 - 18.557: 97.3660% ( 9) 00:09:35.367 18.557 - 18.669: 97.5215% ( 15) 00:09:35.367 18.669 - 18.781: 97.7186% ( 19) 00:09:35.367 18.781 - 18.893: 97.8015% ( 8) 00:09:35.367 18.893 - 19.004: 97.9363% ( 13) 00:09:35.367 19.004 - 19.116: 97.9882% ( 5) 00:09:35.367 19.116 - 19.228: 98.1230% ( 13) 00:09:35.367 19.228 - 19.340: 98.1852% ( 6) 00:09:35.367 19.340 - 19.452: 98.2060% ( 2) 00:09:35.367 19.452 - 19.563: 98.2474% ( 4) 00:09:35.367 19.563 - 19.675: 98.2993% ( 5) 00:09:35.367 19.675 - 19.787: 98.3304% ( 3) 00:09:35.367 19.787 - 19.899: 98.3926% ( 6) 00:09:35.367 19.899 - 20.010: 98.4134% ( 2) 00:09:35.367 20.010 - 20.122: 98.4341% ( 2) 00:09:35.367 20.122 - 20.234: 98.4652% ( 3) 00:09:35.367 20.234 - 20.346: 98.4756% ( 1) 00:09:35.367 20.346 - 20.458: 98.5171% ( 4) 00:09:35.367 20.458 - 20.569: 98.5378% ( 2) 00:09:35.367 20.569 - 20.681: 98.5897% ( 5) 00:09:35.367 20.681 - 20.793: 98.6208% ( 3) 00:09:35.367 20.793 - 20.905: 98.6415% ( 2) 00:09:35.367 20.905 - 21.017: 98.6726% ( 3) 00:09:35.367 21.017 - 21.128: 98.6934% ( 2) 00:09:35.367 21.128 - 21.240: 98.7037% ( 1) 00:09:35.367 21.240 - 21.352: 98.7245% ( 2) 00:09:35.367 21.352 - 21.464: 98.7452% ( 2) 00:09:35.367 21.687 - 21.799: 98.7659% ( 2) 00:09:35.367 21.799 - 21.911: 98.7763% ( 1) 00:09:35.367 21.911 - 22.023: 98.7867% ( 1) 00:09:35.367 22.134 - 22.246: 98.7971% ( 1) 00:09:35.367 22.246 - 22.358: 98.8178% ( 2) 00:09:35.367 22.358 - 22.470: 98.8385% ( 2) 00:09:35.367 22.470 - 22.582: 98.8696% ( 3) 00:09:35.367 22.582 - 22.693: 98.8800% ( 1) 00:09:35.367 22.693 - 22.805: 98.9008% ( 2) 00:09:35.367 22.805 - 22.917: 98.9215% ( 2) 00:09:35.367 22.917 - 23.029: 98.9837% ( 6) 00:09:35.367 23.029 - 23.141: 99.0045% ( 2) 00:09:35.367 23.141 - 23.252: 99.0459% ( 4) 00:09:35.367 23.252 - 23.364: 99.0874% ( 4) 00:09:35.367 23.364 - 23.476: 99.1082% ( 2) 00:09:35.367 23.476 - 23.588: 99.1393% ( 3) 00:09:35.367 23.588 - 23.700: 99.1808% ( 4) 00:09:35.367 23.700 - 23.811: 99.2222% ( 4) 00:09:35.367 23.811 - 23.923: 99.2430% ( 2) 00:09:35.367 23.923 - 24.035: 99.2845% ( 4) 00:09:35.367 24.035 - 
24.147: 99.2948% ( 1) 00:09:35.367 24.147 - 24.259: 99.3259% ( 3) 00:09:35.367 24.259 - 24.370: 99.3363% ( 1) 00:09:35.367 24.370 - 24.482: 99.3467% ( 1) 00:09:35.367 24.482 - 24.594: 99.3570% ( 1) 00:09:35.367 24.594 - 24.706: 99.3674% ( 1) 00:09:35.367 24.817 - 24.929: 99.3778% ( 1) 00:09:35.367 24.929 - 25.041: 99.3882% ( 1) 00:09:35.367 25.041 - 25.153: 99.3985% ( 1) 00:09:35.367 25.153 - 25.265: 99.4193% ( 2) 00:09:35.367 25.488 - 25.600: 99.4400% ( 2) 00:09:35.367 25.600 - 25.712: 99.4607% ( 2) 00:09:35.367 25.712 - 25.824: 99.4711% ( 1) 00:09:35.367 25.824 - 25.935: 99.4919% ( 2) 00:09:35.367 25.935 - 26.047: 99.5022% ( 1) 00:09:35.367 26.047 - 26.159: 99.5230% ( 2) 00:09:35.367 26.271 - 26.383: 99.5333% ( 1) 00:09:35.367 26.830 - 26.941: 99.5437% ( 1) 00:09:35.367 27.165 - 27.277: 99.5541% ( 1) 00:09:35.367 27.389 - 27.500: 99.5748% ( 2) 00:09:35.367 27.612 - 27.724: 99.5956% ( 2) 00:09:35.367 27.948 - 28.059: 99.6370% ( 4) 00:09:35.367 28.059 - 28.171: 99.6682% ( 3) 00:09:35.367 28.171 - 28.283: 99.6889% ( 2) 00:09:35.367 28.283 - 28.395: 99.6993% ( 1) 00:09:35.367 28.395 - 28.507: 99.7407% ( 4) 00:09:35.367 28.618 - 28.842: 99.7926% ( 5) 00:09:35.367 28.842 - 29.066: 99.8444% ( 5) 00:09:35.367 29.066 - 29.289: 99.8548% ( 1) 00:09:35.367 29.289 - 29.513: 99.8756% ( 2) 00:09:35.367 29.513 - 29.736: 99.8859% ( 1) 00:09:35.367 30.183 - 30.407: 99.8963% ( 1) 00:09:35.367 31.301 - 31.525: 99.9274% ( 3) 00:09:35.367 31.525 - 31.748: 99.9378% ( 1) 00:09:35.367 31.972 - 32.196: 99.9481% ( 1) 00:09:35.367 32.196 - 32.419: 99.9585% ( 1) 00:09:35.367 33.537 - 33.761: 99.9689% ( 1) 00:09:35.367 33.984 - 34.208: 99.9793% ( 1) 00:09:35.367 35.549 - 35.773: 99.9896% ( 1) 00:09:35.367 110.896 - 111.343: 100.0000% ( 1) 00:09:35.367 00:09:35.367 Complete histogram 00:09:35.367 ================== 00:09:35.367 Range in us Cumulative Count 00:09:35.367 6.148 - 6.176: 0.0104% ( 1) 00:09:35.367 6.176 - 6.204: 0.0311% ( 2) 00:09:35.367 6.204 - 6.232: 0.0726% ( 4) 00:09:35.367 6.232 - 6.260: 0.0933% ( 2) 00:09:35.367 6.260 - 6.288: 0.1348% ( 4) 00:09:35.367 6.288 - 6.316: 0.2385% ( 10) 00:09:35.367 6.316 - 6.344: 0.3630% ( 12) 00:09:35.367 6.344 - 6.372: 0.5185% ( 15) 00:09:35.367 6.372 - 6.400: 0.6430% ( 12) 00:09:35.367 6.400 - 6.428: 0.8918% ( 24) 00:09:35.367 6.428 - 6.456: 1.0267% ( 13) 00:09:35.367 6.456 - 6.484: 1.1304% ( 10) 00:09:35.367 6.484 - 6.512: 1.2133% ( 8) 00:09:35.367 6.512 - 6.540: 1.3481% ( 13) 00:09:35.367 6.540 - 6.568: 1.4726% ( 12) 00:09:35.367 6.568 - 6.596: 1.7526% ( 27) 00:09:35.367 6.596 - 6.624: 2.1259% ( 36) 00:09:35.367 6.624 - 6.652: 2.6340% ( 49) 00:09:35.367 6.652 - 6.679: 3.5674% ( 90) 00:09:35.367 6.679 - 6.707: 4.9155% ( 130) 00:09:35.367 6.707 - 6.735: 6.5643% ( 159) 00:09:35.367 6.735 - 6.763: 8.4725% ( 184) 00:09:35.367 6.763 - 6.791: 10.3495% ( 181) 00:09:35.367 6.791 - 6.819: 12.6206% ( 219) 00:09:35.367 6.819 - 6.847: 14.4872% ( 180) 00:09:35.367 6.847 - 6.875: 16.4161% ( 186) 00:09:35.367 6.875 - 6.903: 18.2101% ( 173) 00:09:35.368 6.903 - 6.931: 19.8797% ( 161) 00:09:35.368 6.931 - 6.959: 21.6323% ( 169) 00:09:35.368 6.959 - 6.987: 23.4574% ( 176) 00:09:35.368 6.987 - 7.015: 25.2515% ( 173) 00:09:35.368 7.015 - 7.043: 26.7240% ( 142) 00:09:35.368 7.043 - 7.071: 28.3833% ( 160) 00:09:35.368 7.071 - 7.099: 29.7418% ( 131) 00:09:35.368 7.099 - 7.127: 30.9655% ( 118) 00:09:35.368 7.127 - 7.155: 32.1684% ( 116) 00:09:35.368 7.155 - 7.210: 34.4706% ( 222) 00:09:35.368 7.210 - 7.266: 37.1461% ( 258) 00:09:35.368 7.266 - 7.322: 40.3713% ( 311) 00:09:35.368 7.322 - 
7.378: 43.9075% ( 341) 00:09:35.368 7.378 - 7.434: 48.3045% ( 424) 00:09:35.368 7.434 - 7.490: 53.4585% ( 497) 00:09:35.368 7.490 - 7.546: 58.3636% ( 473) 00:09:35.368 7.546 - 7.602: 62.8643% ( 434) 00:09:35.368 7.602 - 7.658: 66.8257% ( 382) 00:09:35.368 7.658 - 7.714: 69.5116% ( 259) 00:09:35.368 7.714 - 7.769: 71.8449% ( 225) 00:09:35.368 7.769 - 7.825: 73.6700% ( 176) 00:09:35.368 7.825 - 7.881: 75.1944% ( 147) 00:09:35.368 7.881 - 7.937: 76.7189% ( 147) 00:09:35.368 7.937 - 7.993: 77.7455% ( 99) 00:09:35.368 7.993 - 8.049: 78.6166% ( 84) 00:09:35.368 8.049 - 8.105: 79.3322% ( 69) 00:09:35.368 8.105 - 8.161: 80.5040% ( 113) 00:09:35.368 8.161 - 8.217: 81.3232% ( 79) 00:09:35.368 8.217 - 8.272: 82.2151% ( 86) 00:09:35.368 8.272 - 8.328: 82.9825% ( 74) 00:09:35.368 8.328 - 8.384: 83.5632% ( 56) 00:09:35.368 8.384 - 8.440: 83.9988% ( 42) 00:09:35.368 8.440 - 8.496: 84.5069% ( 49) 00:09:35.368 8.496 - 8.552: 84.7662% ( 25) 00:09:35.368 8.552 - 8.608: 85.1187% ( 34) 00:09:35.368 8.608 - 8.664: 85.5543% ( 42) 00:09:35.368 8.664 - 8.720: 85.7928% ( 23) 00:09:35.368 8.720 - 8.776: 86.1247% ( 32) 00:09:35.368 8.776 - 8.831: 86.3528% ( 22) 00:09:35.368 8.831 - 8.887: 86.5809% ( 22) 00:09:35.368 8.887 - 8.943: 86.7780% ( 19) 00:09:35.368 8.943 - 8.999: 86.9957% ( 21) 00:09:35.368 8.999 - 9.055: 87.2965% ( 29) 00:09:35.368 9.055 - 9.111: 87.5557% ( 25) 00:09:35.368 9.111 - 9.167: 87.6906% ( 13) 00:09:35.368 9.167 - 9.223: 87.7943% ( 10) 00:09:35.368 9.223 - 9.279: 88.0017% ( 20) 00:09:35.368 9.279 - 9.334: 88.1780% ( 17) 00:09:35.368 9.334 - 9.390: 88.3750% ( 19) 00:09:35.368 9.390 - 9.446: 88.7172% ( 33) 00:09:35.368 9.446 - 9.502: 89.5468% ( 80) 00:09:35.368 9.502 - 9.558: 90.6357% ( 105) 00:09:35.368 9.558 - 9.614: 91.8179% ( 114) 00:09:35.368 9.614 - 9.670: 92.8653% ( 101) 00:09:35.368 9.670 - 9.726: 93.5705% ( 68) 00:09:35.368 9.726 - 9.782: 94.2653% ( 67) 00:09:35.368 9.782 - 9.838: 94.5453% ( 27) 00:09:35.368 9.838 - 9.893: 94.8149% ( 26) 00:09:35.368 9.893 - 9.949: 95.0949% ( 27) 00:09:35.368 9.949 - 10.005: 95.3127% ( 21) 00:09:35.368 10.005 - 10.061: 95.5408% ( 22) 00:09:35.368 10.061 - 10.117: 95.7067% ( 16) 00:09:35.368 10.117 - 10.173: 95.8208% ( 11) 00:09:35.368 10.173 - 10.229: 95.9764% ( 15) 00:09:35.368 10.229 - 10.285: 96.1215% ( 14) 00:09:35.368 10.285 - 10.341: 96.2667% ( 14) 00:09:35.368 10.341 - 10.397: 96.3186% ( 5) 00:09:35.368 10.397 - 10.452: 96.3912% ( 7) 00:09:35.368 10.452 - 10.508: 96.5052% ( 11) 00:09:35.368 10.508 - 10.564: 96.5467% ( 4) 00:09:35.368 10.564 - 10.620: 96.5986% ( 5) 00:09:35.368 10.620 - 10.676: 96.6712% ( 7) 00:09:35.368 10.676 - 10.732: 96.7334% ( 6) 00:09:35.368 10.732 - 10.788: 96.7749% ( 4) 00:09:35.368 10.788 - 10.844: 96.8163% ( 4) 00:09:35.368 10.844 - 10.900: 96.8786% ( 6) 00:09:35.368 10.900 - 10.955: 96.9304% ( 5) 00:09:35.368 11.011 - 11.067: 96.9719% ( 4) 00:09:35.368 11.067 - 11.123: 97.0445% ( 7) 00:09:35.368 11.123 - 11.179: 97.0756% ( 3) 00:09:35.368 11.179 - 11.235: 97.1067% ( 3) 00:09:35.368 11.235 - 11.291: 97.1274% ( 2) 00:09:35.368 11.291 - 11.347: 97.1586% ( 3) 00:09:35.368 11.347 - 11.403: 97.1689% ( 1) 00:09:35.368 11.459 - 11.514: 97.1793% ( 1) 00:09:35.368 11.514 - 11.570: 97.2104% ( 3) 00:09:35.368 11.626 - 11.682: 97.2312% ( 2) 00:09:35.368 11.682 - 11.738: 97.2519% ( 2) 00:09:35.368 11.738 - 11.794: 97.2726% ( 2) 00:09:35.368 11.794 - 11.850: 97.2830% ( 1) 00:09:35.368 11.850 - 11.906: 97.3349% ( 5) 00:09:35.368 11.906 - 11.962: 97.3452% ( 1) 00:09:35.368 12.017 - 12.073: 97.3556% ( 1) 00:09:35.368 12.129 - 12.185: 
97.3660% ( 1) 00:09:35.368 12.185 - 12.241: 97.3867% ( 2) 00:09:35.368 12.241 - 12.297: 97.4489% ( 6) 00:09:35.368 12.297 - 12.353: 97.4593% ( 1) 00:09:35.368 12.353 - 12.409: 97.4904% ( 3) 00:09:35.368 12.409 - 12.465: 97.5215% ( 3) 00:09:35.368 12.521 - 12.576: 97.5319% ( 1) 00:09:35.368 12.576 - 12.632: 97.5630% ( 3) 00:09:35.368 12.632 - 12.688: 97.6045% ( 4) 00:09:35.368 12.688 - 12.744: 97.6252% ( 2) 00:09:35.368 12.744 - 12.800: 97.6667% ( 4) 00:09:35.368 12.800 - 12.856: 97.6978% ( 3) 00:09:35.368 12.856 - 12.912: 97.7289% ( 3) 00:09:35.368 12.912 - 12.968: 97.7808% ( 5) 00:09:35.368 12.968 - 13.024: 97.8326% ( 5) 00:09:35.368 13.024 - 13.079: 97.8534% ( 2) 00:09:35.368 13.079 - 13.135: 97.8741% ( 2) 00:09:35.368 13.135 - 13.191: 97.9052% ( 3) 00:09:35.368 13.191 - 13.247: 97.9260% ( 2) 00:09:35.368 13.247 - 13.303: 97.9363% ( 1) 00:09:35.368 13.303 - 13.359: 97.9778% ( 4) 00:09:35.368 13.359 - 13.415: 97.9985% ( 2) 00:09:35.368 13.415 - 13.471: 98.0504% ( 5) 00:09:35.368 13.471 - 13.527: 98.0815% ( 3) 00:09:35.368 13.527 - 13.583: 98.1437% ( 6) 00:09:35.368 13.583 - 13.638: 98.1748% ( 3) 00:09:35.368 13.638 - 13.694: 98.1956% ( 2) 00:09:35.368 13.694 - 13.750: 98.2267% ( 3) 00:09:35.368 13.750 - 13.806: 98.2371% ( 1) 00:09:35.368 13.806 - 13.862: 98.2785% ( 4) 00:09:35.368 13.862 - 13.918: 98.2889% ( 1) 00:09:35.368 13.918 - 13.974: 98.3200% ( 3) 00:09:35.368 13.974 - 14.030: 98.3408% ( 2) 00:09:35.368 14.030 - 14.086: 98.3719% ( 3) 00:09:35.368 14.086 - 14.141: 98.3822% ( 1) 00:09:35.368 14.141 - 14.197: 98.4030% ( 2) 00:09:35.368 14.253 - 14.309: 98.4652% ( 6) 00:09:35.368 14.309 - 14.421: 98.5274% ( 6) 00:09:35.368 14.421 - 14.533: 98.5482% ( 2) 00:09:35.368 14.533 - 14.645: 98.5689% ( 2) 00:09:35.368 14.645 - 14.756: 98.5897% ( 2) 00:09:35.368 14.756 - 14.868: 98.6104% ( 2) 00:09:35.368 14.868 - 14.980: 98.6208% ( 1) 00:09:35.368 14.980 - 15.092: 98.6415% ( 2) 00:09:35.368 15.203 - 15.315: 98.6622% ( 2) 00:09:35.368 15.315 - 15.427: 98.6726% ( 1) 00:09:35.368 15.427 - 15.539: 98.6934% ( 2) 00:09:35.368 15.539 - 15.651: 98.7452% ( 5) 00:09:35.368 15.651 - 15.762: 98.7867% ( 4) 00:09:35.368 15.874 - 15.986: 98.8178% ( 3) 00:09:35.368 15.986 - 16.098: 98.8385% ( 2) 00:09:35.368 16.098 - 16.210: 98.8593% ( 2) 00:09:35.368 16.210 - 16.321: 98.8696% ( 1) 00:09:35.368 16.321 - 16.433: 98.8800% ( 1) 00:09:35.368 16.545 - 16.657: 98.8904% ( 1) 00:09:35.368 16.657 - 16.769: 98.9111% ( 2) 00:09:35.368 16.769 - 16.880: 98.9215% ( 1) 00:09:35.368 17.216 - 17.328: 98.9319% ( 1) 00:09:35.368 17.328 - 17.439: 98.9526% ( 2) 00:09:35.368 17.439 - 17.551: 98.9837% ( 3) 00:09:35.368 17.551 - 17.663: 99.0356% ( 5) 00:09:35.368 17.663 - 17.775: 99.0459% ( 1) 00:09:35.368 17.775 - 17.886: 99.0667% ( 2) 00:09:35.368 18.110 - 18.222: 99.0874% ( 2) 00:09:35.368 18.222 - 18.334: 99.1496% ( 6) 00:09:35.368 18.334 - 18.445: 99.1704% ( 2) 00:09:35.368 18.445 - 18.557: 99.2119% ( 4) 00:09:35.368 18.557 - 18.669: 99.2430% ( 3) 00:09:35.368 18.669 - 18.781: 99.2533% ( 1) 00:09:35.368 18.781 - 18.893: 99.2845% ( 3) 00:09:35.368 18.893 - 19.004: 99.3259% ( 4) 00:09:35.368 19.004 - 19.116: 99.3467% ( 2) 00:09:35.368 19.116 - 19.228: 99.3570% ( 1) 00:09:35.368 19.228 - 19.340: 99.3674% ( 1) 00:09:35.368 19.340 - 19.452: 99.3882% ( 2) 00:09:35.368 19.452 - 19.563: 99.4089% ( 2) 00:09:35.368 19.787 - 19.899: 99.4193% ( 1) 00:09:35.368 19.899 - 20.010: 99.4296% ( 1) 00:09:35.368 20.122 - 20.234: 99.4400% ( 1) 00:09:35.368 20.234 - 20.346: 99.4504% ( 1) 00:09:35.368 20.346 - 20.458: 99.4607% ( 1) 00:09:35.368 20.569 
- 20.681: 99.4711% ( 1) 00:09:35.368 21.017 - 21.128: 99.4815% ( 1) 00:09:35.368 21.464 - 21.576: 99.4919% ( 1) 00:09:35.368 21.576 - 21.687: 99.5022% ( 1) 00:09:35.368 21.687 - 21.799: 99.5126% ( 1) 00:09:35.368 21.799 - 21.911: 99.5230% ( 1) 00:09:35.368 21.911 - 22.023: 99.5333% ( 1) 00:09:35.368 22.470 - 22.582: 99.5437% ( 1) 00:09:35.368 22.582 - 22.693: 99.5645% ( 2) 00:09:35.368 22.805 - 22.917: 99.5956% ( 3) 00:09:35.368 22.917 - 23.029: 99.6267% ( 3) 00:09:35.368 23.141 - 23.252: 99.6474% ( 2) 00:09:35.368 23.252 - 23.364: 99.6578% ( 1) 00:09:35.368 23.364 - 23.476: 99.6785% ( 2) 00:09:35.368 23.476 - 23.588: 99.6889% ( 1) 00:09:35.368 23.700 - 23.811: 99.6993% ( 1) 00:09:35.368 23.811 - 23.923: 99.7407% ( 4) 00:09:35.368 23.923 - 24.035: 99.7615% ( 2) 00:09:35.368 24.035 - 24.147: 99.8030% ( 4) 00:09:35.368 24.147 - 24.259: 99.8133% ( 1) 00:09:35.368 24.259 - 24.370: 99.8237% ( 1) 00:09:35.368 24.594 - 24.706: 99.8341% ( 1) 00:09:35.368 24.706 - 24.817: 99.8444% ( 1) 00:09:35.368 24.817 - 24.929: 99.8652% ( 2) 00:09:35.368 25.041 - 25.153: 99.8756% ( 1) 00:09:35.368 25.153 - 25.265: 99.8859% ( 1) 00:09:35.368 25.488 - 25.600: 99.8963% ( 1) 00:09:35.368 26.159 - 26.271: 99.9067% ( 1) 00:09:35.368 26.383 - 26.494: 99.9170% ( 1) 00:09:35.368 29.513 - 29.736: 99.9274% ( 1) 00:09:35.368 29.736 - 29.960: 99.9378% ( 1) 00:09:35.368 31.078 - 31.301: 99.9481% ( 1) 00:09:35.368 34.655 - 34.879: 99.9585% ( 1) 00:09:35.368 35.997 - 36.220: 99.9689% ( 1) 00:09:35.368 36.891 - 37.114: 99.9793% ( 1) 00:09:35.368 49.859 - 50.082: 99.9896% ( 1) 00:09:35.368 651.067 - 654.645: 100.0000% ( 1) 00:09:35.368 00:09:35.368 00:09:35.368 real 0m1.263s 00:09:35.368 user 0m1.074s 00:09:35.368 sys 0m0.137s 00:09:35.368 11:51:33 nvme.nvme_overhead -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:35.368 11:51:33 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:09:35.368 ************************************ 00:09:35.368 END TEST nvme_overhead 00:09:35.368 ************************************ 00:09:35.368 11:51:34 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:35.368 11:51:34 nvme -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:09:35.368 11:51:34 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:35.368 11:51:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:35.368 ************************************ 00:09:35.368 START TEST nvme_arbitration 00:09:35.368 ************************************ 00:09:35.368 11:51:34 nvme.nvme_arbitration -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:38.673 Initializing NVMe Controllers 00:09:38.673 Attached to 0000:00:10.0 00:09:38.673 Attached to 0000:00:11.0 00:09:38.673 Attached to 0000:00:13.0 00:09:38.673 Attached to 0000:00:12.0 00:09:38.673 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:09:38.673 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:09:38.673 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:09:38.673 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:38.673 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:38.673 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:38.673 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:38.673 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:38.673 Initialization complete. 
Launching workers. 00:09:38.673 Starting thread on core 1 with urgent priority queue 00:09:38.673 Starting thread on core 2 with urgent priority queue 00:09:38.673 Starting thread on core 3 with urgent priority queue 00:09:38.673 Starting thread on core 0 with urgent priority queue 00:09:38.673 QEMU NVMe Ctrl (12340 ) core 0: 3626.67 IO/s 27.57 secs/100000 ios 00:09:38.673 QEMU NVMe Ctrl (12342 ) core 0: 3626.67 IO/s 27.57 secs/100000 ios 00:09:38.673 QEMU NVMe Ctrl (12341 ) core 1: 3712.00 IO/s 26.94 secs/100000 ios 00:09:38.673 QEMU NVMe Ctrl (12342 ) core 1: 3712.00 IO/s 26.94 secs/100000 ios 00:09:38.673 QEMU NVMe Ctrl (12343 ) core 2: 3882.67 IO/s 25.76 secs/100000 ios 00:09:38.673 QEMU NVMe Ctrl (12342 ) core 3: 3818.67 IO/s 26.19 secs/100000 ios 00:09:38.673 ======================================================== 00:09:38.673 00:09:38.673 00:09:38.673 real 0m3.277s 00:09:38.673 user 0m9.030s 00:09:38.673 sys 0m0.159s 00:09:38.673 11:51:37 nvme.nvme_arbitration -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:38.673 11:51:37 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:38.673 ************************************ 00:09:38.673 END TEST nvme_arbitration 00:09:38.673 ************************************ 00:09:38.673 11:51:37 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:38.673 11:51:37 nvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:09:38.673 11:51:37 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:38.673 11:51:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:38.673 ************************************ 00:09:38.673 START TEST nvme_single_aen 00:09:38.673 ************************************ 00:09:38.673 11:51:37 nvme.nvme_single_aen -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:38.932 Asynchronous Event Request test 00:09:38.932 Attached to 0000:00:10.0 00:09:38.932 Attached to 0000:00:11.0 00:09:38.932 Attached to 0000:00:13.0 00:09:38.932 Attached to 0000:00:12.0 00:09:38.932 Reset controller to setup AER completions for this process 00:09:38.932 Registering asynchronous event callbacks... 
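In the arbitration summary above, the second per-core figure ("secs/100000 ios") is just the fixed I/O count divided by the reported rate; a quick check of the first line:

awk 'BEGIN { printf "%.2f\n", 100000 / 3626.67 }'   # -> 27.57, matching the QEMU NVMe Ctrl (12340) core 0 line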
00:09:38.932 Getting orig temperature thresholds of all controllers 00:09:38.932 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:38.932 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:38.932 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:38.932 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:38.932 Setting all controllers temperature threshold low to trigger AER 00:09:38.932 Waiting for all controllers temperature threshold to be set lower 00:09:38.932 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:38.932 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:38.932 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:38.932 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:38.932 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:38.932 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:38.932 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:38.932 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:38.932 Waiting for all controllers to trigger AER and reset threshold 00:09:38.932 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:38.932 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:38.932 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:38.932 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:38.932 Cleaning up... 00:09:38.932 00:09:38.932 real 0m0.295s 00:09:38.932 user 0m0.092s 00:09:38.932 sys 0m0.151s 00:09:38.932 11:51:37 nvme.nvme_single_aen -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:38.932 11:51:37 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:38.932 ************************************ 00:09:38.932 END TEST nvme_single_aen 00:09:38.932 ************************************ 00:09:38.932 11:51:37 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:38.932 11:51:37 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:09:38.932 11:51:37 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:38.932 11:51:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:38.932 ************************************ 00:09:38.932 START TEST nvme_doorbell_aers 00:09:38.932 ************************************ 00:09:38.932 11:51:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1121 -- # nvme_doorbell_aers 00:09:38.932 11:51:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:38.932 11:51:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:38.932 11:51:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:38.932 11:51:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:38.932 11:51:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:38.932 11:51:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1509 -- # local bdfs 00:09:38.932 11:51:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:38.932 11:51:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:38.932 11:51:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 
00:09:39.191 11:51:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:09:39.191 11:51:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:39.191 11:51:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:39.191 11:51:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:39.450 [2024-07-21 11:51:38.063169] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81100) is not found. Dropping the request. 00:09:49.418 Executing: test_write_invalid_db 00:09:49.418 Waiting for AER completion... 00:09:49.418 Failure: test_write_invalid_db 00:09:49.418 00:09:49.418 Executing: test_invalid_db_write_overflow_sq 00:09:49.418 Waiting for AER completion... 00:09:49.418 Failure: test_invalid_db_write_overflow_sq 00:09:49.418 00:09:49.418 Executing: test_invalid_db_write_overflow_cq 00:09:49.418 Waiting for AER completion... 00:09:49.418 Failure: test_invalid_db_write_overflow_cq 00:09:49.418 00:09:49.418 11:51:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:49.418 11:51:47 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:49.418 [2024-07-21 11:51:48.125856] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81100) is not found. Dropping the request. 00:09:59.383 Executing: test_write_invalid_db 00:09:59.383 Waiting for AER completion... 00:09:59.383 Failure: test_write_invalid_db 00:09:59.383 00:09:59.383 Executing: test_invalid_db_write_overflow_sq 00:09:59.383 Waiting for AER completion... 00:09:59.383 Failure: test_invalid_db_write_overflow_sq 00:09:59.383 00:09:59.383 Executing: test_invalid_db_write_overflow_cq 00:09:59.383 Waiting for AER completion... 00:09:59.383 Failure: test_invalid_db_write_overflow_cq 00:09:59.383 00:09:59.383 11:51:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:59.383 11:51:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:59.383 [2024-07-21 11:51:58.210210] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81100) is not found. Dropping the request. 00:10:09.346 Executing: test_write_invalid_db 00:10:09.346 Waiting for AER completion... 00:10:09.346 Failure: test_write_invalid_db 00:10:09.346 00:10:09.346 Executing: test_invalid_db_write_overflow_sq 00:10:09.346 Waiting for AER completion... 00:10:09.346 Failure: test_invalid_db_write_overflow_sq 00:10:09.346 00:10:09.346 Executing: test_invalid_db_write_overflow_cq 00:10:09.346 Waiting for AER completion... 
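The xtrace above shows how nvme_doorbell_aers discovers its targets and drives them; a sketch of that enumeration-and-loop pattern, with each command taken verbatim from the trace:

rootdir=/home/vagrant/spdk_repo/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))   # one PCI address per controller
for bdf in "${bdfs[@]}"; do
  # 10 s cap per controller; --preserve-status keeps the tool's own exit code
  timeout --preserve-status 10 \
    "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
done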
00:10:09.346 Failure: test_invalid_db_write_overflow_cq 00:10:09.346 00:10:09.346 11:52:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:09.346 11:52:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:09.604 [2024-07-21 11:52:08.241793] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81100) is not found. Dropping the request. 00:10:19.571 Executing: test_write_invalid_db 00:10:19.571 Waiting for AER completion... 00:10:19.571 Failure: test_write_invalid_db 00:10:19.571 00:10:19.571 Executing: test_invalid_db_write_overflow_sq 00:10:19.571 Waiting for AER completion... 00:10:19.571 Failure: test_invalid_db_write_overflow_sq 00:10:19.571 00:10:19.571 Executing: test_invalid_db_write_overflow_cq 00:10:19.571 Waiting for AER completion... 00:10:19.571 Failure: test_invalid_db_write_overflow_cq 00:10:19.571 00:10:19.571 00:10:19.571 real 0m40.293s 00:10:19.571 user 0m35.817s 00:10:19.571 sys 0m4.100s 00:10:19.571 11:52:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:19.571 11:52:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:19.571 ************************************ 00:10:19.571 END TEST nvme_doorbell_aers 00:10:19.571 ************************************ 00:10:19.571 11:52:18 nvme -- nvme/nvme.sh@97 -- # uname 00:10:19.571 11:52:18 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:19.571 11:52:18 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:19.571 11:52:18 nvme -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:10:19.571 11:52:18 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:19.571 11:52:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:19.571 ************************************ 00:10:19.571 START TEST nvme_multi_aen 00:10:19.571 ************************************ 00:10:19.571 11:52:18 nvme.nvme_multi_aen -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:19.571 [2024-07-21 11:52:18.264245] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81100) is not found. Dropping the request. 00:10:19.571 [2024-07-21 11:52:18.264341] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81100) is not found. Dropping the request. 00:10:19.571 [2024-07-21 11:52:18.264356] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81100) is not found. Dropping the request. 00:10:19.571 [2024-07-21 11:52:18.265469] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81100) is not found. Dropping the request. 00:10:19.571 [2024-07-21 11:52:18.265516] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81100) is not found. Dropping the request. 00:10:19.571 [2024-07-21 11:52:18.265530] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81100) is not found. Dropping the request. 00:10:19.571 [2024-07-21 11:52:18.266388] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81100) is not found. 
Dropping the request. 00:10:19.571 [2024-07-21 11:52:18.266431] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81100) is not found. Dropping the request. 00:10:19.571 [2024-07-21 11:52:18.266443] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81100) is not found. Dropping the request. 00:10:19.571 [2024-07-21 11:52:18.267287] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81100) is not found. Dropping the request. 00:10:19.571 [2024-07-21 11:52:18.267329] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81100) is not found. Dropping the request. 00:10:19.571 [2024-07-21 11:52:18.267343] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81100) is not found. Dropping the request. 00:10:19.571 Child process pid: 81621 00:10:19.830 [Child] Asynchronous Event Request test 00:10:19.830 [Child] Attached to 0000:00:10.0 00:10:19.830 [Child] Attached to 0000:00:11.0 00:10:19.830 [Child] Attached to 0000:00:13.0 00:10:19.830 [Child] Attached to 0000:00:12.0 00:10:19.830 [Child] Registering asynchronous event callbacks... 00:10:19.830 [Child] Getting orig temperature thresholds of all controllers 00:10:19.830 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:19.830 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:19.830 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:19.830 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:19.830 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:19.830 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:19.830 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:19.831 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:19.831 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:19.831 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.831 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.831 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.831 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.831 [Child] Cleaning up... 00:10:19.831 Asynchronous Event Request test 00:10:19.831 Attached to 0000:00:10.0 00:10:19.831 Attached to 0000:00:11.0 00:10:19.831 Attached to 0000:00:13.0 00:10:19.831 Attached to 0000:00:12.0 00:10:19.831 Reset controller to setup AER completions for this process 00:10:19.831 Registering asynchronous event callbacks... 
00:10:19.831 Getting orig temperature thresholds of all controllers 00:10:19.831 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:19.831 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:19.831 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:19.831 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:19.831 Setting all controllers temperature threshold low to trigger AER 00:10:19.831 Waiting for all controllers temperature threshold to be set lower 00:10:19.831 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:19.831 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:19.831 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:19.831 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:19.831 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:19.831 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:19.831 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:19.831 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:19.831 Waiting for all controllers to trigger AER and reset threshold 00:10:19.831 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.831 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.831 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.831 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:19.831 Cleaning up... 00:10:19.831 00:10:19.831 real 0m0.519s 00:10:19.831 user 0m0.160s 00:10:19.831 sys 0m0.247s 00:10:19.831 11:52:18 nvme.nvme_multi_aen -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:19.831 11:52:18 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:19.831 ************************************ 00:10:19.831 END TEST nvme_multi_aen 00:10:19.831 ************************************ 00:10:19.831 11:52:18 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:19.831 11:52:18 nvme -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:10:19.831 11:52:18 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:19.831 11:52:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:19.831 ************************************ 00:10:19.831 START TEST nvme_startup 00:10:19.831 ************************************ 00:10:19.831 11:52:18 nvme.nvme_startup -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:20.089 Initializing NVMe Controllers 00:10:20.089 Attached to 0000:00:10.0 00:10:20.089 Attached to 0000:00:11.0 00:10:20.089 Attached to 0000:00:13.0 00:10:20.089 Attached to 0000:00:12.0 00:10:20.089 Initialization complete. 00:10:20.089 Time used:156333.000 (us). 
00:10:20.089 00:10:20.089 real 0m0.237s 00:10:20.089 user 0m0.080s 00:10:20.089 sys 0m0.117s 00:10:20.089 11:52:18 nvme.nvme_startup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:20.089 11:52:18 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:20.089 ************************************ 00:10:20.089 END TEST nvme_startup 00:10:20.089 ************************************ 00:10:20.089 11:52:18 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:20.089 11:52:18 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:20.089 11:52:18 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:20.089 11:52:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:20.089 ************************************ 00:10:20.089 START TEST nvme_multi_secondary 00:10:20.089 ************************************ 00:10:20.089 11:52:18 nvme.nvme_multi_secondary -- common/autotest_common.sh@1121 -- # nvme_multi_secondary 00:10:20.089 11:52:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=81677 00:10:20.089 11:52:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=81678 00:10:20.089 11:52:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:20.089 11:52:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:20.089 11:52:18 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:23.375 Initializing NVMe Controllers 00:10:23.375 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:23.375 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:23.375 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:23.375 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:23.375 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:23.375 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:23.375 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:23.375 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:23.375 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:23.375 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:23.375 Initialization complete. Launching workers. 
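The three spdk_nvme_perf invocations above form the primary/secondary pattern under test: one primary on core mask 0x1 for 5 s and two secondaries on 0x2 and 0x4 for 3 s each, all attached to shared-memory id 0 via -i. A hedged sketch with the flags copied from the xtrace lines (backgrounding and wait are assumptions about how the harness overlaps the processes):

PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
"$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # primary
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # secondary
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # secondary
wait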
00:10:23.375 ======================================================== 00:10:23.375 Latency(us) 00:10:23.375 Device Information : IOPS MiB/s Average min max 00:10:23.375 PCIE (0000:00:10.0) NSID 1 from core 1: 5634.32 22.01 2837.13 801.92 8118.76 00:10:23.375 PCIE (0000:00:11.0) NSID 1 from core 1: 5634.32 22.01 2838.88 839.33 9182.00 00:10:23.375 PCIE (0000:00:13.0) NSID 1 from core 1: 5634.32 22.01 2838.66 864.80 8595.73 00:10:23.375 PCIE (0000:00:12.0) NSID 1 from core 1: 5634.32 22.01 2838.63 843.71 7755.60 00:10:23.375 PCIE (0000:00:12.0) NSID 2 from core 1: 5634.32 22.01 2838.58 850.90 7534.52 00:10:23.375 PCIE (0000:00:12.0) NSID 3 from core 1: 5639.65 22.03 2835.93 842.22 8541.93 00:10:23.375 ======================================================== 00:10:23.375 Total : 33811.22 132.08 2837.97 801.92 9182.00 00:10:23.375 00:10:23.635 Initializing NVMe Controllers 00:10:23.635 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:23.635 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:23.635 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:23.635 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:23.635 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:23.635 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:23.635 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:23.635 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:23.635 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:23.635 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:23.635 Initialization complete. Launching workers. 00:10:23.635 ======================================================== 00:10:23.635 Latency(us) 00:10:23.635 Device Information : IOPS MiB/s Average min max 00:10:23.635 PCIE (0000:00:10.0) NSID 1 from core 2: 3053.76 11.93 5236.78 1158.73 14758.84 00:10:23.635 PCIE (0000:00:11.0) NSID 1 from core 2: 3053.76 11.93 5239.15 1241.43 14759.89 00:10:23.635 PCIE (0000:00:13.0) NSID 1 from core 2: 3053.76 11.93 5239.28 1263.48 15978.15 00:10:23.635 PCIE (0000:00:12.0) NSID 1 from core 2: 3053.76 11.93 5241.93 1206.86 15296.78 00:10:23.635 PCIE (0000:00:12.0) NSID 2 from core 2: 3053.76 11.93 5238.18 1157.77 19473.54 00:10:23.635 PCIE (0000:00:12.0) NSID 3 from core 2: 3053.76 11.93 5239.39 1132.67 15012.75 00:10:23.635 ======================================================== 00:10:23.635 Total : 18322.55 71.57 5239.12 1132.67 19473.54 00:10:23.635 00:10:23.895 11:52:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 81677 00:10:25.796 Initializing NVMe Controllers 00:10:25.796 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:25.796 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:25.796 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:25.796 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:25.796 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:25.796 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:25.796 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:25.796 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:25.796 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:25.796 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:25.796 Initialization complete. Launching workers. 
00:10:25.796 ======================================================== 00:10:25.796 Latency(us) 00:10:25.796 Device Information : IOPS MiB/s Average min max 00:10:25.796 PCIE (0000:00:10.0) NSID 1 from core 0: 8937.49 34.91 1788.45 836.34 8141.09 00:10:25.796 PCIE (0000:00:11.0) NSID 1 from core 0: 8937.49 34.91 1789.68 863.60 7727.16 00:10:25.796 PCIE (0000:00:13.0) NSID 1 from core 0: 8937.49 34.91 1789.65 725.77 7871.23 00:10:25.796 PCIE (0000:00:12.0) NSID 1 from core 0: 8937.49 34.91 1789.60 636.58 7300.98 00:10:25.796 PCIE (0000:00:12.0) NSID 2 from core 0: 8937.49 34.91 1789.57 521.04 8006.83 00:10:25.796 PCIE (0000:00:12.0) NSID 3 from core 0: 8937.49 34.91 1789.53 423.52 8023.12 00:10:25.796 ======================================================== 00:10:25.796 Total : 53624.92 209.47 1789.41 423.52 8141.09 00:10:25.796 00:10:25.796 11:52:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 81678 00:10:25.796 11:52:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=81747 00:10:25.796 11:52:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=81748 00:10:25.796 11:52:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:25.796 11:52:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:25.796 11:52:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:29.082 Initializing NVMe Controllers 00:10:29.082 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:29.082 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:29.082 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:29.082 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:29.082 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:29.082 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:29.082 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:29.082 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:29.082 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:29.082 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:29.082 Initialization complete. Launching workers. 
00:10:29.082 ======================================================== 00:10:29.082 Latency(us) 00:10:29.082 Device Information : IOPS MiB/s Average min max 00:10:29.082 PCIE (0000:00:10.0) NSID 1 from core 0: 5822.80 22.75 2745.30 891.37 7690.62 00:10:29.082 PCIE (0000:00:11.0) NSID 1 from core 0: 5822.80 22.75 2747.18 909.65 8285.10 00:10:29.082 PCIE (0000:00:13.0) NSID 1 from core 0: 5822.80 22.75 2747.28 901.55 8617.57 00:10:29.082 PCIE (0000:00:12.0) NSID 1 from core 0: 5822.80 22.75 2747.39 907.35 5899.23 00:10:29.082 PCIE (0000:00:12.0) NSID 2 from core 0: 5822.80 22.75 2747.36 922.78 6039.99 00:10:29.082 PCIE (0000:00:12.0) NSID 3 from core 0: 5828.13 22.77 2744.87 920.74 6591.83 00:10:29.082 ======================================================== 00:10:29.082 Total : 34942.14 136.49 2746.57 891.37 8617.57 00:10:29.082 00:10:29.082 Initializing NVMe Controllers 00:10:29.082 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:29.082 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:29.082 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:29.082 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:29.082 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:29.082 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:29.082 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:29.082 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:29.082 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:29.082 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:29.082 Initialization complete. Launching workers. 00:10:29.082 ======================================================== 00:10:29.082 Latency(us) 00:10:29.082 Device Information : IOPS MiB/s Average min max 00:10:29.082 PCIE (0000:00:10.0) NSID 1 from core 1: 5933.89 23.18 2693.89 839.42 6523.04 00:10:29.082 PCIE (0000:00:11.0) NSID 1 from core 1: 5933.89 23.18 2695.65 858.87 6841.87 00:10:29.082 PCIE (0000:00:13.0) NSID 1 from core 1: 5933.89 23.18 2695.61 867.94 7233.50 00:10:29.082 PCIE (0000:00:12.0) NSID 1 from core 1: 5933.89 23.18 2695.57 878.92 5975.96 00:10:29.082 PCIE (0000:00:12.0) NSID 2 from core 1: 5933.89 23.18 2695.50 870.39 5858.02 00:10:29.082 PCIE (0000:00:12.0) NSID 3 from core 1: 5933.89 23.18 2695.56 853.32 6231.72 00:10:29.082 ======================================================== 00:10:29.082 Total : 35603.31 139.08 2695.29 839.42 7233.50 00:10:29.082 00:10:30.997 Initializing NVMe Controllers 00:10:30.997 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:30.997 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:30.997 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:30.997 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:30.997 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:30.997 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:30.997 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:30.997 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:30.997 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:30.997 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:30.997 Initialization complete. Launching workers. 
00:10:30.997 Initializing NVMe Controllers
00:10:30.997 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:10:30.997 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:10:30.997 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:10:30.997 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:10:30.997 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:10:30.997 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:10:30.997 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:10:30.997 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:10:30.997 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:10:30.997 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:10:30.997 Initialization complete. Launching workers.
00:10:30.997 ========================================================
00:10:30.997 Latency(us)
00:10:30.997 Device Information : IOPS MiB/s Average min max
00:10:30.997 PCIE (0000:00:10.0) NSID 1 from core 2: 3349.10 13.08 4775.03 924.60 13275.64
00:10:30.997 PCIE (0000:00:11.0) NSID 1 from core 2: 3349.10 13.08 4776.93 940.08 13394.04
00:10:30.997 PCIE (0000:00:13.0) NSID 1 from core 2: 3349.10 13.08 4776.81 952.64 16931.71
00:10:30.997 PCIE (0000:00:12.0) NSID 1 from core 2: 3349.10 13.08 4776.73 984.59 13372.72
00:10:30.997 PCIE (0000:00:12.0) NSID 2 from core 2: 3349.10 13.08 4776.89 984.39 13359.72
00:10:30.997 PCIE (0000:00:12.0) NSID 3 from core 2: 3349.10 13.08 4774.23 861.96 13193.31
00:10:30.997 ========================================================
00:10:30.997 Total : 20094.61 78.49 4776.10 861.96 16931.71
00:10:30.997
00:10:30.997 11:52:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 81747
00:10:30.997 11:52:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 81748
00:10:30.997
00:10:30.997 real 0m10.668s
00:10:30.997 user 0m18.334s
00:10:30.997 sys 0m0.861s
00:10:30.997 11:52:29 nvme.nvme_multi_secondary -- common/autotest_common.sh@1122 -- # xtrace_disable
00:10:30.997 11:52:29 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
00:10:30.997 ************************************
00:10:30.997 END TEST nvme_multi_secondary
00:10:30.997 ************************************
00:10:30.997 11:52:29 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:10:30.997 11:52:29 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:10:30.997 11:52:29 nvme -- common/autotest_common.sh@1085 -- # [[ -e /proc/80697 ]]
00:10:30.997 11:52:29 nvme -- common/autotest_common.sh@1086 -- # kill 80697
00:10:30.997 11:52:29 nvme -- common/autotest_common.sh@1087 -- # wait 80697
00:10:30.997 [2024-07-21 11:52:29.655152] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81620) is not found. Dropping the request.
00:10:30.997 [2024-07-21 11:52:29.655303] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81620) is not found. Dropping the request.
00:10:30.997 [2024-07-21 11:52:29.655349] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81620) is not found. Dropping the request.
00:10:30.997 [2024-07-21 11:52:29.655478] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81620) is not found. Dropping the request.
00:10:30.997 [2024-07-21 11:52:29.657067] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81620) is not found. Dropping the request.
00:10:30.997 [2024-07-21 11:52:29.657161] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81620) is not found. Dropping the request.
00:10:30.997 [2024-07-21 11:52:29.657206] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81620) is not found. Dropping the request.
00:10:30.997 [2024-07-21 11:52:29.657259] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81620) is not found. Dropping the request.
00:10:30.997 [2024-07-21 11:52:29.658732] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81620) is not found. Dropping the request.
00:10:30.997 [2024-07-21 11:52:29.658851] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81620) is not found. Dropping the request.
00:10:30.997 [2024-07-21 11:52:29.658929] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81620) is not found. Dropping the request.
00:10:30.997 [2024-07-21 11:52:29.658980] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81620) is not found. Dropping the request.
00:10:30.997 [2024-07-21 11:52:29.660455] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81620) is not found. Dropping the request.
00:10:30.997 [2024-07-21 11:52:29.660542] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81620) is not found. Dropping the request.
00:10:30.997 [2024-07-21 11:52:29.660590] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81620) is not found. Dropping the request.
00:10:30.997 [2024-07-21 11:52:29.660633] nvme_pcie_common.c: 293:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 81620) is not found. Dropping the request.
00:10:30.997 11:52:29 nvme -- common/autotest_common.sh@1089 -- # rm -f /var/run/spdk_stub0
00:10:30.997 11:52:29 nvme -- common/autotest_common.sh@1093 -- # echo 2
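The *ERROR* burst above is the tail end of kill_stub: the stub process (pid 80697) holding the shared controllers is shut down while admin commands owned by an already-exited process (pid 81620) are still queued, so the PCIe transport drops those requests during teardown and the suite carries on. A hedged sketch of the teardown just traced, with the pid and sentinel path taken from this log (the real helper is kill_stub in autotest_common.sh):

    stub_pid=80697                      # pid taken from the trace above
    if [[ -e /proc/$stub_pid ]]; then
        kill "$stub_pid"
        wait "$stub_pid" 2>/dev/null    # reap it; wait only works on our own children
    fi
    rm -f /var/run/spdk_stub0           # drop the stub's sentinel file
    # Requests still owned by a process that has already exited are dropped by
    # the driver at this point, which is what the messages above are reporting.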
00:10:30.997 11:52:29 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:10:30.997 11:52:29 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:10:30.997 11:52:29 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable
00:10:30.997 11:52:29 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:30.997 ************************************
00:10:30.997 START TEST bdev_nvme_reset_stuck_adm_cmd
00:10:30.997 ************************************
00:10:30.997 11:52:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:10:31.256 * Looking for test storage...
00:10:31.256 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:31.256 11:52:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:31.256 11:52:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:31.256 11:52:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:31.256 11:52:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:31.256 11:52:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:31.256 11:52:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:31.256 11:52:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # bdfs=() 00:10:31.256 11:52:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1520 -- # local bdfs 00:10:31.256 11:52:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:10:31.256 11:52:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:10:31.256 11:52:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:31.256 11:52:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:10:31.256 11:52:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:31.256 11:52:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:31.256 11:52:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:10:31.256 11:52:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:10:31.256 11:52:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:31.256 11:52:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:10:31.256 11:52:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:10:31.256 11:52:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:10:31.256 11:52:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=81902 00:10:31.256 11:52:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:31.256 11:52:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:31.256 11:52:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 81902 00:10:31.256 11:52:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@827 -- # '[' -z 81902 ']' 00:10:31.256 11:52:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:31.256 11:52:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:31.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
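The bdf discovery traced above reduces to a single pipeline: scripts/gen_nvme.sh emits a JSON bdev config for every NVMe device it can see, jq pulls out each controller's PCI address, and the first one becomes the reset target. A condensed sketch of that helper chain, using the paths from this log:

    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1     # this run found 4 controllers
    bdf=${bdfs[0]}                      # -> 0000:00:10.0, as selected above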
00:10:31.256 11:52:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:31.256 11:52:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:31.256 11:52:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:31.514 [2024-07-21 11:52:30.120634] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:10:31.514 [2024-07-21 11:52:30.120776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81902 ] 00:10:31.514 [2024-07-21 11:52:30.298349] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:31.514 [2024-07-21 11:52:30.349131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:31.514 [2024-07-21 11:52:30.349326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:31.514 [2024-07-21 11:52:30.349341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.514 [2024-07-21 11:52:30.349455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:32.453 11:52:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:32.453 11:52:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # return 0 00:10:32.453 11:52:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:10:32.453 11:52:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.453 11:52:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:32.453 nvme0n1 00:10:32.453 11:52:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.453 11:52:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:32.453 11:52:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_yauMw.txt 00:10:32.453 11:52:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:32.453 11:52:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:32.453 11:52:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:32.453 true 00:10:32.453 11:52:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:32.453 11:52:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:32.453 11:52:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721562751 00:10:32.453 11:52:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:32.453 11:52:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=81925 00:10:32.453 
11:52:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:32.453 11:52:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:34.350 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:34.350 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.350 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:34.350 [2024-07-21 11:52:33.084733] nvme_ctrlr.c:1653:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:10:34.350 [2024-07-21 11:52:33.085139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:34.350 [2024-07-21 11:52:33.085213] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:34.350 [2024-07-21 11:52:33.085268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:34.350 [2024-07-21 11:52:33.087049] bdev_nvme.c:2064:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:34.350 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.350 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 81925 00:10:34.350 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 81925 00:10:34.350 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 81925 00:10:34.350 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_yauMw.txt 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:34.351 11:52:33 
nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_yauMw.txt 00:10:34.351 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 81902 00:10:34.609 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@946 -- # '[' -z 81902 ']' 00:10:34.609 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # kill -0 81902 00:10:34.609 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # uname 00:10:34.609 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:34.609 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 81902 00:10:34.609 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:34.609 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:34.609 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 81902' 00:10:34.609 killing process with pid 81902 00:10:34.609 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@965 -- # kill 81902 00:10:34.609 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # wait 81902 00:10:34.867 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:34.867 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:34.867 00:10:34.867 real 
0m3.841s 00:10:34.867 user 0m13.357s 00:10:34.867 sys 0m0.656s 00:10:34.867 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:34.867 11:52:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:34.867 ************************************ 00:10:34.867 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:34.867 ************************************ 00:10:34.867 11:52:33 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:34.867 11:52:33 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:34.867 11:52:33 nvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:10:34.867 11:52:33 nvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:34.867 11:52:33 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:34.867 ************************************ 00:10:34.867 START TEST nvme_fio 00:10:34.867 ************************************ 00:10:34.867 11:52:33 nvme.nvme_fio -- common/autotest_common.sh@1121 -- # nvme_fio_test 00:10:34.867 11:52:33 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:34.867 11:52:33 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:34.867 11:52:33 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:34.867 11:52:33 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:34.867 11:52:33 nvme.nvme_fio -- common/autotest_common.sh@1509 -- # local bdfs 00:10:34.867 11:52:33 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:34.867 11:52:33 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:34.867 11:52:33 nvme.nvme_fio -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:10:35.127 11:52:33 nvme.nvme_fio -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:10:35.127 11:52:33 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:35.127 11:52:33 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:35.127 11:52:33 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:35.127 11:52:33 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:35.127 11:52:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:35.127 11:52:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:35.386 11:52:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:35.386 11:52:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:35.645 11:52:34 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:35.645 11:52:34 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:35.645 11:52:34 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:35.645 11:52:34 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:10:35.645 11:52:34 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:10:35.645 11:52:34 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:10:35.645 11:52:34 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:35.645 11:52:34 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:10:35.645 11:52:34 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:10:35.645 11:52:34 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:10:35.645 11:52:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:35.645 11:52:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:10:35.645 11:52:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:10:35.645 11:52:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:35.645 11:52:34 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:35.645 11:52:34 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:10:35.645 11:52:34 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:35.645 11:52:34 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:35.904 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:35.904 fio-3.35 00:10:35.904 Starting 1 thread 00:10:42.525 00:10:42.525 test: (groupid=0, jobs=1): err= 0: pid=82055: Sun Jul 21 11:52:40 2024 00:10:42.525 read: IOPS=22.8k, BW=89.2MiB/s (93.5MB/s)(178MiB/2001msec) 00:10:42.525 slat (nsec): min=4359, max=73348, avg=5232.34, stdev=1196.05 00:10:42.525 clat (usec): min=258, max=13967, avg=2797.90, stdev=355.81 00:10:42.525 lat (usec): min=263, max=14041, avg=2803.13, stdev=356.38 00:10:42.525 clat percentiles (usec): 00:10:42.525 | 1.00th=[ 2474], 5.00th=[ 2573], 10.00th=[ 2606], 20.00th=[ 2671], 00:10:42.525 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2802], 00:10:42.525 | 70.00th=[ 2835], 80.00th=[ 2868], 90.00th=[ 2933], 95.00th=[ 2966], 00:10:42.525 | 99.00th=[ 3097], 99.50th=[ 4359], 99.90th=[ 7767], 99.95th=[10683], 00:10:42.525 | 99.99th=[13566] 00:10:42.525 bw ( KiB/s): min=86640, max=93800, per=99.68%, avg=91029.33, stdev=3844.67, samples=3 00:10:42.525 iops : min=21660, max=23450, avg=22757.33, stdev=961.17, samples=3 00:10:42.525 write: IOPS=22.7k, BW=88.6MiB/s (92.9MB/s)(177MiB/2001msec); 0 zone resets 00:10:42.525 slat (nsec): min=4453, max=47946, avg=5380.56, stdev=1214.05 00:10:42.525 clat (usec): min=233, max=13680, avg=2805.08, stdev=364.55 00:10:42.525 lat (usec): min=238, max=13724, avg=2810.46, stdev=365.15 00:10:42.525 clat percentiles (usec): 00:10:42.525 | 1.00th=[ 2507], 5.00th=[ 2573], 10.00th=[ 2638], 20.00th=[ 2671], 00:10:42.525 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2835], 00:10:42.525 | 70.00th=[ 2835], 80.00th=[ 2900], 90.00th=[ 2933], 95.00th=[ 2966], 00:10:42.525 | 99.00th=[ 3130], 99.50th=[ 4424], 99.90th=[ 8160], 99.95th=[11076], 00:10:42.525 | 99.99th=[13173] 00:10:42.525 bw ( KiB/s): min=86280, max=95008, per=100.00%, avg=91200.00, stdev=4468.99, samples=3 00:10:42.525 iops : min=21570, max=23752, avg=22800.00, stdev=1117.25, samples=3 00:10:42.525 lat (usec) : 
250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:42.525 lat (msec) : 2=0.05%, 4=99.28%, 10=0.56%, 20=0.06% 00:10:42.525 cpu : usr=99.30%, sys=0.10%, ctx=4, majf=0, minf=626 00:10:42.525 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:42.525 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:42.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:42.526 issued rwts: total=45683,45401,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:42.526 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:42.526 00:10:42.526 Run status group 0 (all jobs): 00:10:42.526 READ: bw=89.2MiB/s (93.5MB/s), 89.2MiB/s-89.2MiB/s (93.5MB/s-93.5MB/s), io=178MiB (187MB), run=2001-2001msec 00:10:42.526 WRITE: bw=88.6MiB/s (92.9MB/s), 88.6MiB/s-88.6MiB/s (92.9MB/s-92.9MB/s), io=177MiB (186MB), run=2001-2001msec 00:10:42.526 ----------------------------------------------------- 00:10:42.526 Suppressions used: 00:10:42.526 count bytes template 00:10:42.526 1 32 /usr/src/fio/parse.c 00:10:42.526 1 8 libtcmalloc_minimal.so 00:10:42.526 ----------------------------------------------------- 00:10:42.526 00:10:42.526 11:52:40 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:42.526 11:52:40 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:42.526 11:52:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:42.526 11:52:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:42.526 11:52:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:42.526 11:52:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:42.526 11:52:41 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:42.526 11:52:41 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:42.526 11:52:41 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:42.526 11:52:41 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:10:42.526 11:52:41 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:42.526 11:52:41 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:10:42.526 11:52:41 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:42.526 11:52:41 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:10:42.526 11:52:41 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:10:42.526 11:52:41 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:10:42.526 11:52:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:10:42.526 11:52:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:42.526 11:52:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:10:42.526 11:52:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:42.526 11:52:41 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ 
-n /usr/lib64/libasan.so.8 ]] 00:10:42.526 11:52:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:10:42.526 11:52:41 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:42.526 11:52:41 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:42.526 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:42.526 fio-3.35 00:10:42.526 Starting 1 thread 00:10:50.655 00:10:50.655 test: (groupid=0, jobs=1): err= 0: pid=82149: Sun Jul 21 11:52:48 2024 00:10:50.655 read: IOPS=22.9k, BW=89.3MiB/s (93.7MB/s)(179MiB/2001msec) 00:10:50.655 slat (nsec): min=4357, max=81930, avg=5147.97, stdev=1139.11 00:10:50.655 clat (usec): min=259, max=14543, avg=2793.12, stdev=372.18 00:10:50.655 lat (usec): min=264, max=14625, avg=2798.27, stdev=372.76 00:10:50.655 clat percentiles (usec): 00:10:50.655 | 1.00th=[ 2540], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2704], 00:10:50.655 | 30.00th=[ 2737], 40.00th=[ 2737], 50.00th=[ 2769], 60.00th=[ 2802], 00:10:50.655 | 70.00th=[ 2802], 80.00th=[ 2835], 90.00th=[ 2900], 95.00th=[ 2933], 00:10:50.655 | 99.00th=[ 3064], 99.50th=[ 4359], 99.90th=[ 8586], 99.95th=[11076], 00:10:50.655 | 99.99th=[14222] 00:10:50.655 bw ( KiB/s): min=87040, max=93224, per=99.22%, avg=90765.33, stdev=3280.82, samples=3 00:10:50.655 iops : min=21760, max=23306, avg=22691.33, stdev=820.21, samples=3 00:10:50.655 write: IOPS=22.7k, BW=88.8MiB/s (93.1MB/s)(178MiB/2001msec); 0 zone resets 00:10:50.655 slat (nsec): min=4420, max=49697, avg=5272.67, stdev=1167.76 00:10:50.655 clat (usec): min=280, max=14350, avg=2800.39, stdev=385.02 00:10:50.655 lat (usec): min=285, max=14367, avg=2805.66, stdev=385.63 00:10:50.655 clat percentiles (usec): 00:10:50.655 | 1.00th=[ 2573], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2704], 00:10:50.655 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2769], 60.00th=[ 2802], 00:10:50.655 | 70.00th=[ 2835], 80.00th=[ 2835], 90.00th=[ 2900], 95.00th=[ 2933], 00:10:50.655 | 99.00th=[ 3064], 99.50th=[ 4424], 99.90th=[ 8848], 99.95th=[11469], 00:10:50.655 | 99.99th=[13960] 00:10:50.655 bw ( KiB/s): min=86744, max=93512, per=100.00%, avg=90954.67, stdev=3674.45, samples=3 00:10:50.655 iops : min=21686, max=23378, avg=22738.67, stdev=918.61, samples=3 00:10:50.655 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:50.655 lat (msec) : 2=0.05%, 4=99.24%, 10=0.61%, 20=0.07% 00:10:50.655 cpu : usr=99.25%, sys=0.10%, ctx=4, majf=0, minf=625 00:10:50.655 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:50.655 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:50.655 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:50.656 issued rwts: total=45762,45484,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:50.656 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:50.656 00:10:50.656 Run status group 0 (all jobs): 00:10:50.656 READ: bw=89.3MiB/s (93.7MB/s), 89.3MiB/s-89.3MiB/s (93.7MB/s-93.7MB/s), io=179MiB (187MB), run=2001-2001msec 00:10:50.656 WRITE: bw=88.8MiB/s (93.1MB/s), 88.8MiB/s-88.8MiB/s (93.1MB/s-93.1MB/s), io=178MiB (186MB), run=2001-2001msec 00:10:50.656 ----------------------------------------------------- 00:10:50.656 Suppressions used: 00:10:50.656 count bytes template 00:10:50.656 1 32 
/usr/src/fio/parse.c 00:10:50.656 1 8 libtcmalloc_minimal.so 00:10:50.656 ----------------------------------------------------- 00:10:50.656 00:10:50.656 11:52:48 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:50.656 11:52:48 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:50.656 11:52:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:50.656 11:52:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:50.656 11:52:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:50.656 11:52:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:50.656 11:52:48 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:50.656 11:52:48 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:50.656 11:52:48 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:50.656 11:52:48 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:10:50.656 11:52:48 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:50.656 11:52:48 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:10:50.656 11:52:48 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:50.656 11:52:48 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:10:50.656 11:52:48 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:10:50.656 11:52:48 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:10:50.656 11:52:48 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:50.656 11:52:48 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:10:50.656 11:52:48 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:10:50.656 11:52:48 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:50.656 11:52:48 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:50.656 11:52:48 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:10:50.656 11:52:48 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:50.656 11:52:48 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:50.656 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:50.656 fio-3.35 00:10:50.656 Starting 1 thread 00:10:58.765 00:10:58.765 test: (groupid=0, jobs=1): err= 0: pid=82254: Sun Jul 21 11:52:56 2024 00:10:58.765 read: IOPS=22.8k, BW=89.2MiB/s (93.5MB/s)(178MiB/2001msec) 00:10:58.765 slat (nsec): min=4368, max=88130, avg=5327.76, stdev=1175.09 00:10:58.765 clat (usec): min=231, max=14202, avg=2797.70, stdev=364.26 00:10:58.765 lat (usec): min=236, max=14290, avg=2803.03, stdev=364.84 
00:10:58.765 clat percentiles (usec): 00:10:58.765 | 1.00th=[ 2573], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2704], 00:10:58.765 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2769], 60.00th=[ 2802], 00:10:58.765 | 70.00th=[ 2835], 80.00th=[ 2835], 90.00th=[ 2900], 95.00th=[ 2933], 00:10:58.765 | 99.00th=[ 3261], 99.50th=[ 4424], 99.90th=[ 8160], 99.95th=[11076], 00:10:58.765 | 99.99th=[13829] 00:10:58.765 bw ( KiB/s): min=87272, max=92304, per=99.01%, avg=90405.33, stdev=2733.78, samples=3 00:10:58.765 iops : min=21818, max=23076, avg=22601.33, stdev=683.45, samples=3 00:10:58.765 write: IOPS=22.7k, BW=88.6MiB/s (92.9MB/s)(177MiB/2001msec); 0 zone resets 00:10:58.765 slat (usec): min=4, max=197, avg= 5.49, stdev= 1.51 00:10:58.765 clat (usec): min=206, max=14007, avg=2805.60, stdev=376.55 00:10:58.765 lat (usec): min=212, max=14023, avg=2811.09, stdev=377.13 00:10:58.765 clat percentiles (usec): 00:10:58.765 | 1.00th=[ 2573], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2704], 00:10:58.765 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2769], 60.00th=[ 2802], 00:10:58.765 | 70.00th=[ 2835], 80.00th=[ 2835], 90.00th=[ 2900], 95.00th=[ 2933], 00:10:58.765 | 99.00th=[ 3359], 99.50th=[ 4424], 99.90th=[ 8356], 99.95th=[11469], 00:10:58.765 | 99.99th=[13566] 00:10:58.765 bw ( KiB/s): min=86952, max=93200, per=99.90%, avg=90656.00, stdev=3281.55, samples=3 00:10:58.766 iops : min=21738, max=23300, avg=22664.00, stdev=820.39, samples=3 00:10:58.766 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:58.766 lat (msec) : 2=0.33%, 4=98.91%, 10=0.66%, 20=0.07% 00:10:58.766 cpu : usr=99.30%, sys=0.05%, ctx=2, majf=0, minf=626 00:10:58.766 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:58.766 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:58.766 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:58.766 issued rwts: total=45676,45398,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:58.766 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:58.766 00:10:58.766 Run status group 0 (all jobs): 00:10:58.766 READ: bw=89.2MiB/s (93.5MB/s), 89.2MiB/s-89.2MiB/s (93.5MB/s-93.5MB/s), io=178MiB (187MB), run=2001-2001msec 00:10:58.766 WRITE: bw=88.6MiB/s (92.9MB/s), 88.6MiB/s-88.6MiB/s (92.9MB/s-92.9MB/s), io=177MiB (186MB), run=2001-2001msec 00:10:58.766 ----------------------------------------------------- 00:10:58.766 Suppressions used: 00:10:58.766 count bytes template 00:10:58.766 1 32 /usr/src/fio/parse.c 00:10:58.766 1 8 libtcmalloc_minimal.so 00:10:58.766 ----------------------------------------------------- 00:10:58.766 00:10:58.766 11:52:56 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:58.766 11:52:56 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:58.766 11:52:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:58.766 11:52:56 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:58.766 11:52:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:58.766 11:52:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:58.766 11:52:56 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:58.766 11:52:56 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 
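Each per-controller pass in this fio loop is launched the same way: fio_nvme locates the sanitizer runtime that the SPDK fio plugin was linked against (the ldd/grep/awk trace above), preloads it together with the plugin, and hands fio a filename in SPDK's trtype/traddr syntax, with '.' in place of ':' so fio does not split the address. A condensed sketch of the invocation just traced, paths as in this log:

    FIO=/usr/src/fio/fio
    PLUGIN=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    CFG=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

    asan_lib=$(ldd "$PLUGIN" | awk '/libasan/ {print $3}')   # empty on non-ASan builds
    LD_PRELOAD="${asan_lib:+$asan_lib }$PLUGIN" \
        "$FIO" "$CFG" '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096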
00:10:58.766 11:52:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:58.766 11:52:56 nvme.nvme_fio -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:10:58.766 11:52:56 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:58.766 11:52:56 nvme.nvme_fio -- common/autotest_common.sh@1335 -- # local sanitizers 00:10:58.766 11:52:56 nvme.nvme_fio -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:58.766 11:52:56 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # shift 00:10:58.766 11:52:56 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local asan_lib= 00:10:58.766 11:52:56 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:10:58.766 11:52:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:58.766 11:52:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:10:58.766 11:52:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # grep libasan 00:10:58.766 11:52:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:58.766 11:52:56 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:58.766 11:52:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # break 00:10:58.766 11:52:56 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:58.766 11:52:56 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:58.766 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:58.766 fio-3.35 00:10:58.766 Starting 1 thread 00:11:05.337 00:11:05.337 test: (groupid=0, jobs=1): err= 0: pid=82360: Sun Jul 21 11:53:04 2024 00:11:05.337 read: IOPS=23.4k, BW=91.2MiB/s (95.7MB/s)(183MiB/2001msec) 00:11:05.337 slat (nsec): min=4311, max=53636, avg=5172.68, stdev=871.71 00:11:05.337 clat (usec): min=218, max=9311, avg=2738.07, stdev=233.81 00:11:05.337 lat (usec): min=224, max=9353, avg=2743.24, stdev=234.03 00:11:05.337 clat percentiles (usec): 00:11:05.337 | 1.00th=[ 2507], 5.00th=[ 2573], 10.00th=[ 2606], 20.00th=[ 2671], 00:11:05.337 | 30.00th=[ 2671], 40.00th=[ 2704], 50.00th=[ 2737], 60.00th=[ 2737], 00:11:05.337 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2835], 95.00th=[ 2868], 00:11:05.337 | 99.00th=[ 3163], 99.50th=[ 4228], 99.90th=[ 5407], 99.95th=[ 6521], 00:11:05.337 | 99.99th=[ 8979] 00:11:05.337 bw ( KiB/s): min=91048, max=93864, per=99.37%, avg=92832.00, stdev=1551.32, samples=3 00:11:05.337 iops : min=22762, max=23466, avg=23208.00, stdev=387.83, samples=3 00:11:05.337 write: IOPS=23.2k, BW=90.7MiB/s (95.1MB/s)(181MiB/2001msec); 0 zone resets 00:11:05.337 slat (nsec): min=4427, max=26165, avg=5350.51, stdev=835.09 00:11:05.337 clat (usec): min=238, max=9113, avg=2741.61, stdev=241.21 00:11:05.337 lat (usec): min=243, max=9124, avg=2746.96, stdev=241.42 00:11:05.337 clat percentiles (usec): 00:11:05.337 | 1.00th=[ 2507], 5.00th=[ 2606], 10.00th=[ 2638], 20.00th=[ 2671], 00:11:05.337 | 30.00th=[ 2671], 40.00th=[ 
2704], 50.00th=[ 2737], 60.00th=[ 2737], 00:11:05.337 | 70.00th=[ 2769], 80.00th=[ 2802], 90.00th=[ 2835], 95.00th=[ 2868], 00:11:05.337 | 99.00th=[ 3294], 99.50th=[ 4293], 99.90th=[ 5538], 99.95th=[ 7111], 00:11:05.337 | 99.99th=[ 8717] 00:11:05.337 bw ( KiB/s): min=90504, max=95472, per=100.00%, avg=92941.33, stdev=2485.31, samples=3 00:11:05.337 iops : min=22626, max=23868, avg=23235.33, stdev=621.33, samples=3 00:11:05.337 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:05.337 lat (msec) : 2=0.30%, 4=99.04%, 10=0.62% 00:11:05.337 cpu : usr=99.40%, sys=0.10%, ctx=4, majf=0, minf=624 00:11:05.337 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:05.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.337 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:05.337 issued rwts: total=46734,46441,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.337 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:05.337 00:11:05.337 Run status group 0 (all jobs): 00:11:05.337 READ: bw=91.2MiB/s (95.7MB/s), 91.2MiB/s-91.2MiB/s (95.7MB/s-95.7MB/s), io=183MiB (191MB), run=2001-2001msec 00:11:05.337 WRITE: bw=90.7MiB/s (95.1MB/s), 90.7MiB/s-90.7MiB/s (95.1MB/s-95.1MB/s), io=181MiB (190MB), run=2001-2001msec 00:11:05.596 ----------------------------------------------------- 00:11:05.596 Suppressions used: 00:11:05.596 count bytes template 00:11:05.596 1 32 /usr/src/fio/parse.c 00:11:05.596 1 8 libtcmalloc_minimal.so 00:11:05.596 ----------------------------------------------------- 00:11:05.596 00:11:05.596 11:53:04 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:05.596 11:53:04 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:11:05.596 00:11:05.596 real 0m30.560s 00:11:05.596 user 0m16.817s 00:11:05.596 sys 0m25.943s 00:11:05.596 11:53:04 nvme.nvme_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:05.596 11:53:04 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:11:05.596 ************************************ 00:11:05.596 END TEST nvme_fio 00:11:05.596 ************************************ 00:11:05.596 00:11:05.596 real 1m41.321s 00:11:05.596 user 3m39.368s 00:11:05.596 sys 0m36.849s 00:11:05.596 11:53:04 nvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:05.596 11:53:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:11:05.596 ************************************ 00:11:05.596 END TEST nvme 00:11:05.596 ************************************ 00:11:05.596 11:53:04 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:11:05.596 11:53:04 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:05.596 11:53:04 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:05.596 11:53:04 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:05.596 11:53:04 -- common/autotest_common.sh@10 -- # set +x 00:11:05.596 ************************************ 00:11:05.596 START TEST nvme_scc 00:11:05.596 ************************************ 00:11:05.596 11:53:04 nvme_scc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:05.855 * Looking for test storage... 
00:11:05.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:05.855 11:53:04 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:05.855 11:53:04 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:05.855 11:53:04 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:05.855 11:53:04 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:05.855 11:53:04 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:05.855 11:53:04 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:05.855 11:53:04 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:05.855 11:53:04 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:05.855 11:53:04 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.855 11:53:04 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.855 11:53:04 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.855 11:53:04 nvme_scc -- paths/export.sh@5 -- # export PATH 00:11:05.855 11:53:04 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:05.855 11:53:04 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:11:05.855 11:53:04 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:05.855 11:53:04 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:11:05.855 11:53:04 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:05.855 11:53:04 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:11:05.855 11:53:04 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:05.855 11:53:04 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:05.855 11:53:04 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:05.855 11:53:04 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:11:05.855 11:53:04 nvme_scc -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:05.855 11:53:04 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:11:05.855 11:53:04 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:05.855 11:53:04 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:05.855 11:53:04 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:06.423 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:06.423 Waiting for block devices as requested 00:11:06.682 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:06.682 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:06.682 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:06.941 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:12.224 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:12.224 11:53:10 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:12.224 11:53:10 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:12.224 11:53:10 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:12.224 11:53:10 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:12.224 11:53:10 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:12.224 
11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
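The wall of nvme/functions.sh trace that scan_nvme_ctrls produces here is one tight loop: for each controller it runs nvme id-ctrl, splits every 'field : value' line on the colon, and stores the pair in an associative array via eval (nvme0[vid]=0x1b36, nvme0[sn]='12341 ', and so on). A stripped-down sketch of the same loop, using direct assignment instead of the eval dance and the nvme-cli path shown in the trace:

    declare -A nvme0
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}        # field name: vid, ssvid, sn, mdts, ...
        [[ -n $reg && -n $val ]] || continue
        nvme0[$reg]=${val# }            # keep the value as printed (may be padded)
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

    echo "${nvme0[vid]}"                # -> 0x1b36 for the QEMU controllers here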
00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:12.224 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:12.225 11:53:10 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:12.225 11:53:10 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.225 11:53:10 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 
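The oncs=0x15d mask captured just above is the register this nvme_scc test ultimately cares about: per the NVMe base specification, bit 0 is Compare, bit 2 Dataset Management, bit 3 Write Zeroes, bit 4 Save/Select in Set Features, bit 6 Timestamp, and bit 8 the Copy command (Simple Copy), all set here, while Write Uncorrectable (bit 1), Reservations (bit 5) and Verify (bit 7) are clear. A one-liner for the check the suite presumably performs through its own helper:

    # hedged sketch: test the Copy bit in the ONCS value traced above
    oncs=0x15d
    (( oncs & (1 << 8) )) && echo "Copy (SCC) supported" || echo "no Copy support"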
00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 
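The last two records show why the parser splits on the first colon only: nvme-cli's power-state block prints 'ps 0 : mp:25.00W operational enlat:16 ...', so the key becomes 'ps 0' (ps0 once spaces are dropped) and every later colon stays inside the value, while the wrapped continuation 'rwt:0 rwl:0 idle_power:- active_power:-' lands under the key rwt with value '0 rwl:0 ...', exactly as stored above. A two-line demo of that behaviour:

    # demo: IFS=: read splits at the first colon and keeps the rest intact
    IFS=: read -r reg val <<< 'ps 0 : mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
    printf 'reg=[%s] val=[%s]\n' "$reg" "$val"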
00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:12.226 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 
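Just above, functions.sh@53-58 switch from the controller to its namespaces: local -n _ctrl_ns=nvme0_ns creates a nameref so the generic loop can fill a per-controller namespace map, and the glob "$ctrl/${ctrl##*/}n"* enumerates /sys/class/nvme/nvme0/nvme0n1 and friends before each gets its own nvme_get id-ns pass. A reduced, runnable sketch of that bookkeeping (array and path names follow the convention seen in the trace; collect_namespaces is an illustrative name):

    # hedged sketch of the nameref pattern at functions.sh@53-58
    declare -gA nvme0_ns=()
    ctrl=/sys/class/nvme/nvme0
    collect_namespaces() {
      local -n _ctrl_ns=${ctrl##*/}_ns      # nameref onto nvme0_ns
      local ns
      for ns in "$ctrl/${ctrl##*/}n"*; do   # the glob from functions.sh@54
        [[ -e $ns ]] || continue
        _ctrl_ns[${ns##*n}]=${ns##*/}       # index 1 -> nvme0n1, as at @58
      done
    }
    collect_namespaces; declare -p nvme0_ns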
-- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 
00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:12.227 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:12.228 11:53:10 
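The id-ns values above fix the namespace geometry: nsze, ncap and nuse all read 0x140000 blocks, the low nibble of flbas=0x4 selects LBA format 4, and lbaf4 is the entry marked '(in use)' with lbads:12, i.e. 2^12 = 4096-byte blocks with no per-block metadata. That makes the namespace 5 GiB and fully allocated:

    # arithmetic from the traced id-ns fields
    nsze=0x140000; lbads=12                   # flbas & 0xf = 4 -> lbaf4 -> lbads:12
    echo "$(( nsze * (1 << lbads) )) bytes"   # 5368709120 = 5 GiB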
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:12.228 11:53:10 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:12.228 11:53:10 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:12.228 11:53:10 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:12.228 11:53:10 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.228 11:53:10 
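At this point the trace closes out nvme0 (functions.sh@58-63): ctrls[nvme0]=nvme0, nvmes[nvme0]=nvme0_ns, bdfs[nvme0]=0000:00:11.0 and ordered_ctrls[0]=nvme0, then the outer loop restarts for nvme1 at 0000:00:10.0, whose id-ctrl output differs from nvme0's only in the serial number (12340 vs 12341). A sketch of how a caller might walk those maps once the scan finishes (stub data mirrors the trace; the loop itself is illustrative, not from functions.sh):

    # hedged sketch: iterate the maps scan_nvme_ctrls populates
    declare -A bdfs=([nvme0]=0000:00:11.0 [nvme1]=0000:00:10.0)
    declare -a ordered_ctrls=(nvme0 nvme1)
    declare -A nvme0=([sn]='12341 ') nvme1=([sn]='12340 ')
    for ctrl in "${ordered_ctrls[@]}"; do
      declare -n c=$ctrl                      # nameref onto the per-controller map
      printf '%s at %s sn=%s\n' "$ctrl" "${bdfs[$ctrl]}" "${c[sn]}"
      unset -n c                              # drop the ref before rebinding
    done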
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.228 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[ver]="0x10400"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:12.229 11:53:10 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 
11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:12.229 11:53:10 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.229 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:12.230 11:53:10 nvme_scc 
-- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.230 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:12.231 11:53:10 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1n1[nsfeat]="0x14"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:12.231 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.232 
11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 
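The id-ns fields above are captured the same way the id-ctrl fields were: functions.sh@16 runs /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1, and the @21-@23 lines split each output row on ":" and eval the pair into the nvme1n1 associative array. A minimal sketch of that helper, reconstructed from the traced line numbers (whitespace trimming and error handling are omitted, the real helper resolves the nvme-cli binary itself, and the exact body is an assumption):

    # sketch of nvme_get as implied by the functions.sh@16-23 trace lines;
    # here the full command is passed in to keep the sketch self-contained
    nvme_get() {  # e.g. nvme_get nvme1n1 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
        local ref=$1 reg val              # @17: ref names the global array to fill
        shift                             # @18
        local -gA "$ref=()"               # @20: (re)create the array, nvme1n1=()
        while IFS=: read -r reg val; do   # @21: split "nsze : 0x17a17a" style rows
            [[ -n $val ]] || continue     # @22: skip rows with an empty value
            eval "${ref}[${reg}]=\"${val}\""  # @23: nvme1n1[nsze]="0x17a17a"
        done < <("$@")                    # @16: the nvme-cli invocation
    }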
00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:12.232 11:53:10 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:12.232 11:53:10 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:12.232 11:53:10 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:12.232 11:53:10 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.232 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:12.233 11:53:10 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:12.233 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 
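Above, the harness has moved on from nvme1 to the controller behind 0000:00:12.0: the functions.sh@47-52 lines pick the next /sys/class/nvme entry, gate it through pci_can_use, and start a fresh id-ctrl pass as nvme2, while the @53-@63 lines (visible earlier, where nvme1 and nvme1n1 were finished) record the results in the ctrls, nvmes, bdfs, and ordered_ctrls maps. A sketch of that outer loop, pieced together from the traced statements (it runs inside a function in the real script, and the control flow between the traced lines is inferred):

    for ctrl in /sys/class/nvme/nvme*; do              # @47
        [[ -e $ctrl ]] || continue                     # @48
        pci=0000:00:12.0                               # @49: read from $ctrl/device (derivation not in the trace)
        pci_can_use "$pci" || continue                 # @50: scripts/common.sh filter
        ctrl_dev=${ctrl##*/}                           # @51: e.g. nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"  # @52
        local -n _ctrl_ns=${ctrl_dev}_ns               # @53
        for ns in "$ctrl/${ctrl##*/}n"*; do            # @54
            [[ -e $ns ]] || continue                   # @55
            ns_dev=${ns##*/}                           # @56
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"    # @57
            _ctrl_ns[${ns##*n}]=$ns_dev                # @58
        done
        ctrls["$ctrl_dev"]=$ctrl_dev                   # @60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns              # @61
        bdfs["$ctrl_dev"]=$pci                         # @62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev     # @63
    done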
00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 
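One detail worth calling out in the values just captured: wctemp and cctemp are reported in kelvin by Identify Controller, so nvme2's 343 and 373 correspond to 70 C and 100 C. Converting them (array and field names exactly as the trace stores them):

    # kelvin to Celsius for the thresholds parsed above
    echo "warning:  $(( ${nvme2[wctemp]} - 273 )) C"    # 343 K -> 70 C
    echo "critical: $(( ${nvme2[cctemp]} - 273 )) C"    # 373 K -> 100 C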
00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.234 11:53:10 
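The wctemp=343 and cctemp=373 values captured above are the controller's warning and critical composite temperature thresholds, which id-ctrl reports in kelvins. A minimal conversion sketch for readability (the helper below is hypothetical, not part of nvme/functions.sh):

# Hypothetical helper, not in nvme/functions.sh: id-ctrl temperature
# thresholds are kelvins per the NVMe spec.
kelvin_to_celsius() {
    local k=$1
    echo $((k - 273)) # integer approximation of K - 273.15
}
kelvin_to_celsius 343 # wctemp -> 70 (warning)
kelvin_to_celsius 373 # cctemp -> 100 (critical)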
00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:11:12.234 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
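oncs=0x15d above is the Optional NVM Command Support bitmask, the field this nvme_scc suite ultimately keys on: bit 8 (0x100) advertises the Copy command. A sketch of the bit test, assuming the bit layout of the NVMe base specification:

# Sketch only; bit positions follow the NVMe base spec, they are not
# quoted from nvme/functions.sh.
oncs=0x15d
(( oncs & 0x100 )) && echo "Copy (simple copy) supported" # bit 8
# Other bits set in 0x15d: 0 Compare, 2 Dataset Management,
# 3 Write Zeroes, 4 Save/Select in Features, 6 Timestamp.

The queue entry sizes nearby decode the same way per nibble: sqes=0x66 means 2^6 = 64-byte submission entries (required and maximum alike), and cqes=0x44 means 2^4 = 16-byte completion entries.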
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
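The records above show the shape of nvme_get: run nvme-cli (here /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1), declare a global associative array named after the device, then split every "field : value" output line on ':' and eval the assignment. A simplified sketch of that pattern; the real nvme/functions.sh differs in detail:

# Simplified re-creation of the parse loop traced at functions.sh@17-23.
nvme_get_sketch() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                    # e.g. declares nvme2n1
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}           # field names arrive padded
        [[ -n $reg && -n $val ]] || continue
        eval "${ref}[${reg}]=\"${val# }\"" # e.g. nvme2n1[nsze]="0x100000"
    done < <("$@")
}
# Usage: nvme_get_sketch nvme2n1 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1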
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
00:11:12.235 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:11:12.236 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
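For nvme2n1 the parse records flbas=0x4 with lbaf4 marked "(in use)", i.e. 4096-byte blocks without metadata, and nsze=0x100000 blocks. A short sketch combining those values (numbers copied from the trace; the decoding itself is standard NVMe):

flbas=0x4 nsze=0x100000 lbads=12     # lbads from "lbaf4: ms:0 lbads:12 rp:0 (in use)"
fmt=$((flbas & 0xf))                 # low nibble selects the active format -> 4
block=$((1 << lbads))                # 2^12 = 4096-byte logical blocks
echo "lbaf${fmt}: ${block} B blocks, $(((nsze * block) >> 30)) GiB namespace"
# -> lbaf4: 4096 B blocks, 4 GiB namespace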
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()'
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0
00:11:12.237 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
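Between namespaces the trace passes through functions.sh@54-58: glob the controller's sysfs directory, run nvme_get on each node, and file it into the _ctrl_ns nameref indexed by namespace number (hence _ctrl_ns[...]=nvme2n2 above). A compact sketch of that walk; the function name collect_ctrl_ns is invented for illustration:

collect_ctrl_ns() {
    local ctrl=$1                       # e.g. /sys/class/nvme/nvme2
    local -n _ctrl_ns=${ctrl##*/}_ns    # nameref to, e.g., nvme2_ns
    local ns
    for ns in "$ctrl/${ctrl##*/}n"*; do # nvme2n1 nvme2n2 nvme2n3 ...
        [[ -e $ns ]] || continue
        _ctrl_ns[${ns##*n}]=${ns##*/}   # index by trailing namespace number
    done
}
# Usage: declare -A nvme2_ns; collect_ctrl_ns /sys/class/nvme/nvme2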
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@18 -- # shift
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()'
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:11:12.238 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0
00:11:12.239 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0
00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000
00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000
00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
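(The block of trace above is functions.sh walking `nvme id-ns /dev/nvme2n3` at functions.sh@16-23: each `field : value` line of nvme-cli output is split on ':' and eval'ed into the global associative array nvme2n3. That is how nsze/ncap/nuse land as 0x100000 (1,048,576 LBAs) and flbas=0x4 selects lbaf4 (ms:0 lbads:12, i.e. 4096-byte blocks, marked in use). A minimal sketch of that read loop; the helper name and trimming details here are illustrative, only the IFS=':' / read / eval shape is taken from the trace itself:

    nvme_get_sketch() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"               # e.g. declare -gA nvme2n3=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}      # field name: "nsze   " -> "nsze"
            val=${val# }                  # drop the leading space after ':'
            [[ -n $val ]] || continue     # the trace skips empty values
            eval "${ref}[\$reg]=\$val"    # nvme2n3[nsze]=0x100000, ...
        done < <("$@")                    # e.g. nvme id-ns /dev/nvme2n3
    }
    # Usage, assuming nvme-cli is installed and the device exists:
    # nvme_get_sketch nvme2n3 nvme id-ns /dev/nvme2n3; echo "${nvme2n3[nsze]}"

Because the last read variable keeps the remainder of the line, multi-colon values such as "ms:0 lbads:9 rp:0" survive intact, which matches the lbaf0-lbaf7 entries captured above.)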
00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:12.240 11:53:10 nvme_scc -- scripts/common.sh@15 -- # local i 00:11:12.240 11:53:10 nvme_scc -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:12.240 11:53:10 nvme_scc -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:12.240 11:53:10 nvme_scc -- scripts/common.sh@24 -- # return 0 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@18 -- # shift 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:12.240 11:53:10 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.240 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:12.241 11:53:10 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.241 11:53:10 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:12.241 11:53:10 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:12.241 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.242 11:53:10 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:12.242 11:53:10 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:12.242 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:12.243 
11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 
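(That completes the id-ctrl dump for /dev/nvme3: another QEMU controller (vid 0x1b36, subsystem vendor 0x1af4, sn "12343"), exposing subnqn nqn.2019-08.org.qemu:fdp-subsys3, nn=256 namespaces, and oncs=0x15d, which the SCC selection below keys on. The closing `local -n _ctrl_ns=nvme3_ns` is a bash nameref: it binds a local name to the per-controller namespace table so the generic scan loop can fill it without eval. A sketch of that bookkeeping; the wrapper function and the nvme3n1 entry are invented for illustration, while the array names and assignment forms match functions.sh@53-63 in the trace:

    declare -A ctrls=() nvmes=() bdfs=()
    declare -A nvme3_ns=()
    declare -a ordered_ctrls=()
    register_ctrl_sketch() {                     # illustrative wrapper, not in the script
        local ctrl_dev=$1 pci=$2 ns=$3
        local -n _ctrl_ns=${ctrl_dev}_ns         # nameref: writes land in nvme3_ns
        [[ -n $ns ]] && _ctrl_ns[${ns##*n}]=$ns  # e.g. nvme3_ns[1]=nvme3n1 (hypothetical ns)
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns        # stores the *name* of the ns table
        bdfs["$ctrl_dev"]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev
    }
    register_ctrl_sketch nvme3 0000:00:13.0 nvme3n1

Storing the table's name rather than its contents in nvmes[] is what lets later code re-bind to it with another `local -n`, as functions.sh@73 does when it looks registers back up.)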
00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:12.243 11:53:10 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme1 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@197 -- # echo nvme1 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme0 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@197 -- # echo nvme0 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:11:12.243 11:53:10 nvme_scc -- 
nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme3 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@197 -- # echo nvme3 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@184 -- # get_oncs nvme2 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@197 -- # echo nvme2 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@206 -- # echo nvme1 00:11:12.243 11:53:10 nvme_scc -- nvme/functions.sh@207 -- # return 0 00:11:12.243 11:53:10 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:12.243 11:53:10 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:11:12.243 11:53:10 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:12.811 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:13.788 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:13.788 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:13.788 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:13.788 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:13.788 11:53:12 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:13.788 11:53:12 nvme_scc -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:13.788 11:53:12 nvme_scc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:13.788 11:53:12 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:13.788 ************************************ 00:11:13.788 START TEST nvme_simple_copy 00:11:13.788 ************************************ 00:11:13.788 11:53:12 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1121 -- # 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:11:14.051 Initializing NVMe Controllers 00:11:14.051 Attaching to 0000:00:10.0 00:11:14.051 Controller supports SCC. Attached to 0000:00:10.0 00:11:14.051 Namespace ID: 1 size: 6GB 00:11:14.051 Initialization complete. 00:11:14.051 00:11:14.051 Controller QEMU NVMe Ctrl (12340 ) 00:11:14.051 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:14.051 Namespace Block Size:4096 00:11:14.051 Writing LBAs 0 to 63 with Random Data 00:11:14.051 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:14.051 LBAs matching Written Data: 64 00:11:14.051 00:11:14.051 real 0m0.256s 00:11:14.051 user 0m0.085s 00:11:14.051 sys 0m0.070s 00:11:14.051 11:53:12 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:14.051 ************************************ 00:11:14.051 END TEST nvme_simple_copy 00:11:14.051 ************************************ 00:11:14.051 11:53:12 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:11:14.051 00:11:14.051 real 0m8.439s 00:11:14.051 user 0m1.314s 00:11:14.051 sys 0m2.180s 00:11:14.051 11:53:12 nvme_scc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:14.051 11:53:12 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:11:14.051 ************************************ 00:11:14.051 END TEST nvme_scc 00:11:14.051 ************************************ 00:11:14.051 11:53:12 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:11:14.051 11:53:12 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:11:14.051 11:53:12 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:11:14.051 11:53:12 -- spdk/autotest.sh@232 -- # [[ 1 -eq 1 ]] 00:11:14.051 11:53:12 -- spdk/autotest.sh@233 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:14.051 11:53:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:14.051 11:53:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:14.051 11:53:12 -- common/autotest_common.sh@10 -- # set +x 00:11:14.051 ************************************ 00:11:14.051 START TEST nvme_fdp 00:11:14.051 ************************************ 00:11:14.051 11:53:12 nvme_fdp -- common/autotest_common.sh@1121 -- # test/nvme/nvme_fdp.sh 00:11:14.308 * Looking for test storage... 
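(Before nvme_scc wrapped up, the trace above made the controller choice by testing ONCS bit 8 for each of the four QEMU controllers (functions.sh@182-197): every one reports oncs=0x15d, bit 8 is set, so all four qualify and nvme1 at 0000:00:10.0 is picked first. setup.sh then rebound the devices to uio_pci_generic and simple_copy exercised the command end to end: 64 LBAs (0-63) written with random data, copied to destination LBA 256, and all 64 LBAs matched on readback. A one-line sketch of the bit test; the function name is invented for illustration, the arithmetic is exactly what the trace evaluates:

    # ONCS (Optional NVM Command Support) comes from Identify Controller;
    # bit 8 advertises the Copy (SCC) command. 0x15d = 0b1_0101_1101,
    # so bit 8 is set and the arithmetic test succeeds (exit status 0).
    ctrl_has_scc_sketch() {
        local oncs=$1
        (( oncs & 1 << 8 ))
    }
    ctrl_has_scc_sketch 0x15d && echo "SCC supported"

nvme_fdp now sources the same functions.sh, so the scan and parse loop sketched earlier runs again below against the re-bound controllers.)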
00:11:14.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:14.308 11:53:12 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:14.308 11:53:13 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:14.308 11:53:13 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:14.308 11:53:13 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:14.309 11:53:13 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:14.309 11:53:13 nvme_fdp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.309 11:53:13 nvme_fdp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.309 11:53:13 nvme_fdp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.309 11:53:13 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.309 11:53:13 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.309 11:53:13 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.309 11:53:13 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:11:14.309 11:53:13 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.309 11:53:13 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:11:14.309 11:53:13 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:14.309 11:53:13 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:11:14.309 11:53:13 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:14.309 11:53:13 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:11:14.309 11:53:13 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:14.309 11:53:13 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:14.309 11:53:13 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:14.309 11:53:13 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:11:14.309 11:53:13 nvme_fdp -- 
cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:14.309 11:53:13 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:14.875 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:14.875 Waiting for block devices as requested 00:11:15.134 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:15.134 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:15.134 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:15.392 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:20.675 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:20.675 11:53:19 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:11:20.675 11:53:19 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:20.675 11:53:19 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:20.675 11:53:19 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:20.675 11:53:19 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.675 
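As the trace below shows, scan_nvme_ctrls walks every /sys/class/nvme/nvme* entry and nvme_get splits each line of `nvme id-ctrl` output on ':' and evals it into a bash associative array (nvme0[vid]=0x1b36, nvme0[sn]='12341 ', ...). A standalone sketch of the same parsing pattern, using a fixed array name instead of functions.sh's eval into a dynamically named one:

    # Sketch of the nvme_get loop: one associative array per controller.
    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}       # keys arrive padded, e.g. 'vid       '
        [[ -n $reg && -n $val ]] || continue
        ctrl[$reg]=$val                # values keep their leading space, as in the log
    done < <(nvme id-ctrl /dev/nvme0)
    echo "vid=${ctrl[vid]} mdts=${ctrl[mdts]}"

Lines without a ':' leave val empty and are skipped, which is why the blank [[ -n '' ]] checks in the trace fall through without an eval.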
11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[rtd3e]="0"' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:20.675 11:53:19 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.675 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:20.676 11:53:19 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:20.676 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:20.677 11:53:19 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n - ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:11:20.677 11:53:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:20.678 
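After the id-ctrl pass, functions.sh@54 globs the controller's namespaces straight out of sysfs ("$ctrl/${ctrl##*/}n"*) and repeats the same read/eval walk with `nvme id-ns`, producing the nvme0n1 fields that follow. The enumeration itself reduces to roughly this sketch:

    # Sketch: list namespaces per controller directly from sysfs.
    for ctrl in /sys/class/nvme/nvme*; do
        for ns in "$ctrl/${ctrl##*/}n"*; do
            [[ -e $ns ]] || continue   # glob stays literal if the ctrl has no ns
            echo "${ctrl##*/}: namespace ${ns##*/}"
        done
    done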
11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:11:20.678 
11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.678 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:11:20.679 11:53:19 
nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 
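The namespace above reports nsze/ncap/nuse of 0x140000 and flbas=0x4, and lbaf4 is the format marked "(in use)" with lbads:12, i.e. 2^12 = 4096-byte blocks — consistent with the "Namespace Block Size:4096" line in the simple-copy output. Working that out in shell, with the values from this dump:

    # Sketch: derive block size and capacity from the id-ns fields above.
    flbas=0x4; lbads=12; nsze=0x140000
    fmt=$(( flbas & 0xf ))     # low nibble of FLBAS selects the LBA format (nlbaf=7 here)
    block=$(( 1 << lbads ))    # lbads:12 -> 4096-byte blocks
    bytes=$(( nsze * block ))  # 0x140000 blocks * 4096 bytes
    echo "lbaf$fmt in use: ${block}B blocks, $(( bytes / 1024 / 1024 / 1024 )) GiB"

For this namespace that comes to 1,310,720 blocks, or 5 GiB.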
00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:11:20.679 11:53:19 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:20.679 11:53:19 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:20.679 11:53:19 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:20.679 11:53:19 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:20.679 11:53:19 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.679 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 
11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
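[editor's note — not part of the captured log] The trace above and below is `set -x` output of one idiom repeated per controller: nvme/functions.sh@16-23 pipes `nvme id-ctrl` through a `while IFS=: read -r reg val` loop, skips blank values, and `eval`s each pair into a global associative array (here `nvme1`). A minimal standalone sketch of that idiom, simplified from the real nvme_get (which also shifts args and handles namespaces via a nameref), assuming nvme-cli is installed and /dev/nvme1 exists:

  #!/usr/bin/env bash
  # Hedged sketch of the parsing loop visible in this trace, not the SPDK script itself.
  declare -gA nvme1=()
  while IFS=: read -r reg val; do
      [[ -n ${val// /} ]] || continue        # matches the "[[ -n ... ]]" guards in the trace
      reg=${reg// /}                          # strip the padding nvme-cli puts around keys
      eval "nvme1[$reg]=\"${val# }\""         # e.g. nvme1[vid]=0x1b36, as logged above
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1)
  echo "serial: ${nvme1[sn]}"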
00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:20.680 11:53:19 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
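[editor's note — not part of the captured log] The `wctemp=343` / `cctemp=373` values captured just above are the NVMe warning and critical composite temperature thresholds, which the spec reports in Kelvin; a quick conversion shows the QEMU defaults:

  # Hedged example: WCTEMP/CCTEMP are Kelvin values per the NVMe spec.
  wctemp=343; cctemp=373
  echo "warning threshold:  $((wctemp - 273)) C"   # -> 70 C
  echo "critical threshold: $((cctemp - 273)) C"   # -> 100 C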
00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.680 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:20.681 11:53:19 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 
00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:20.681 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.682 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 
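[editor's note — not part of the captured log] The id-ns fields that follow (lbaf0..lbaf7, flbas) describe the namespace's supported LBA formats: `ms` is metadata bytes per block, `lbads` is log2 of the data size, `rp` is relative performance, and the low nibble of `flbas` selects the active format — here 0x7, matching the lbaf7 entry tagged "(in use)" below. A hedged decoding sketch using those logged values:

  # Hedged sketch: decode the lbaf descriptors captured in this trace.
  flbas=0x7
  fmt=$((flbas & 0xf))               # low nibble of flbas picks the format index
  lbads=12                            # from "ms:64 lbads:12 rp:0 (in use)"
  echo "format $fmt in use: block size $((1 << lbads)) bytes"   # -> 4096 bytes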
00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 
11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 
00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.683 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:11:20.684 11:53:19 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:20.684 11:53:19 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:20.684 11:53:19 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:20.684 11:53:19 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[cntlid]="0"' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:20.684 11:53:19 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.684 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2[hmmin]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.685 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:20.686 11:53:19 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:20.686 11:53:19 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 
11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.686 11:53:19 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n1[nuse]="0x100000"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:20.687 11:53:19 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.687 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n1[anagrpid]=0 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 
lbads:9 rp:0 ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:11:20.688 11:53:19 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
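[annotation] The trace just above (functions.sh@54-58) shows how the suite discovers a controller's namespaces: it globs "$ctrl/${ctrl##*/}n"* under /sys/class/nvme and records each hit in _ctrl_ns keyed by the numeric namespace id. A minimal standalone sketch of that enumeration pattern, assuming the standard /sys/class/nvme sysfs layout; the array name ns_by_id is illustrative, not a name from the test suite:

    ctrl=/sys/class/nvme/nvme2
    declare -A ns_by_id                      # namespace id -> device name
    for ns in "$ctrl/${ctrl##*/}n"*; do      # e.g. /sys/class/nvme/nvme2/nvme2n1
        [[ -e $ns ]] || continue             # skip if the glob matched nothing
        ns_dev=${ns##*/}                     # nvme2n1
        ns_by_id[${ns_dev##*n}]=$ns_dev      # key by the trailing namespace id
    done
    echo "namespaces: ${!ns_by_id[*]} -> ${ns_by_id[*]}"

The key expression ${ns_dev##*n} mirrors the trace's _ctrl_ns[${ns##*n}]=... assignment: everything up to and including the last 'n' is stripped, leaving the namespace id.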
00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.688 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 
00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 
' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:20.689 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.690 11:53:19 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:11:20.690 11:53:19 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:11:20.690 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@60 -- # 
ctrls["$ctrl_dev"]=nvme2 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:11:20.691 11:53:19 nvme_fdp -- scripts/common.sh@15 -- # local i 00:11:20.691 11:53:19 nvme_fdp -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:20.691 11:53:19 nvme_fdp -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:20.691 11:53:19 nvme_fdp -- scripts/common.sh@24 -- # return 0 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:20.691 11:53:19 
nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.691 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 
00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.692 11:53:19 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:20.692 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 
00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@23 
-- # nvme3[icsvscc]=0 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.693 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.694 
11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@61 -- # 
nvmes["$ctrl_dev"]=nvme3_ns 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:20.694 11:53:19 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:11:20.694 11:53:19 nvme_fdp -- 
nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x88010 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@197 -- # echo nvme3 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:11:20.694 11:53:19 nvme_fdp -- nvme/functions.sh@206 -- # echo nvme3 00:11:20.695 11:53:19 nvme_fdp -- nvme/functions.sh@207 -- # return 0 00:11:20.695 11:53:19 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:11:20.695 11:53:19 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:11:20.695 11:53:19 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:21.632 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:22.200 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:22.200 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:22.200 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:22.200 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:22.200 11:53:21 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:22.200 11:53:21 nvme_fdp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:11:22.200 11:53:21 nvme_fdp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:22.200 11:53:21 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:22.200 ************************************ 00:11:22.200 START TEST nvme_flexible_data_placement 00:11:22.200 ************************************ 00:11:22.200 11:53:21 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:11:22.459 Initializing NVMe Controllers 00:11:22.459 Attaching to 0000:00:13.0 00:11:22.459 Controller supports FDP Attached to 0000:00:13.0 00:11:22.459 Namespace ID: 1 Endurance Group ID: 1 
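Note on the controller selection traced above: the harness settled on nvme3 (PCI 0000:00:13.0) because it is the only controller whose Identify Controller CTRATT word advertises Flexible Data Placement. Bit 19 is set in nvme3's 0x88010, while the other controllers report 0x8000 (bit 15 only), so "(( ctratt & 1 << 19 ))" is false for them. A minimal sketch of that check, assuming nvme-cli's plain-text id-ctrl output format (the real helper, ctrl_has_fdp in nvme/functions.sh, reads the value from the pre-parsed register array instead of re-invoking nvme):

ctrl_has_fdp() {
    # Read CTRATT from Identify Controller; bit 19 = FDP supported.
    local ctrl=$1 ctratt
    ctratt=$(nvme id-ctrl "/dev/$ctrl" | awk '/^ctratt/ {print $3}')
    (( ctratt & 1 << 19 ))
}

ctrl_has_fdp nvme3 && echo "nvme3 supports FDP"   # true for ctratt=0x88010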
00:11:22.459 Initialization complete. 00:11:22.459 00:11:22.459 ================================== 00:11:22.459 == FDP tests for Namespace: #01 == 00:11:22.459 ================================== 00:11:22.459 00:11:22.459 Get Feature: FDP: 00:11:22.459 ================= 00:11:22.459 Enabled: Yes 00:11:22.459 FDP Configuration Index: 0 00:11:22.459 00:11:22.459 FDP configurations log page 00:11:22.459 =========================== 00:11:22.459 Number of FDP configurations: 1 00:11:22.459 Version: 0 00:11:22.459 Size: 112 00:11:22.459 FDP Configuration Descriptor: 0 00:11:22.459 Descriptor Size: 96 00:11:22.459 Reclaim Group Identifier format: 2 00:11:22.459 FDP Volatile Write Cache: Not Present 00:11:22.459 FDP Configuration: Valid 00:11:22.459 Vendor Specific Size: 0 00:11:22.459 Number of Reclaim Groups: 2 00:11:22.459 Number of Reclaim Unit Handles: 8 00:11:22.459 Max Placement Identifiers: 128 00:11:22.459 Number of Namespaces Supported: 256 00:11:22.459 Reclaim Unit Nominal Size: 6000000 bytes 00:11:22.459 Estimated Reclaim Unit Time Limit: Not Reported 00:11:22.460 RUH Desc #000: RUH Type: Initially Isolated 00:11:22.460 RUH Desc #001: RUH Type: Initially Isolated 00:11:22.460 RUH Desc #002: RUH Type: Initially Isolated 00:11:22.460 RUH Desc #003: RUH Type: Initially Isolated 00:11:22.460 RUH Desc #004: RUH Type: Initially Isolated 00:11:22.460 RUH Desc #005: RUH Type: Initially Isolated 00:11:22.460 RUH Desc #006: RUH Type: Initially Isolated 00:11:22.460 RUH Desc #007: RUH Type: Initially Isolated 00:11:22.460 00:11:22.460 FDP reclaim unit handle usage log page 00:11:22.460 ====================================== 00:11:22.460 Number of Reclaim Unit Handles: 8 00:11:22.460 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:22.460 RUH Usage Desc #001: RUH Attributes: Unused 00:11:22.460 RUH Usage Desc #002: RUH Attributes: Unused 00:11:22.460 RUH Usage Desc #003: RUH Attributes: Unused 00:11:22.460 RUH Usage Desc #004: RUH Attributes: Unused 00:11:22.460 RUH Usage Desc #005: RUH Attributes: Unused 00:11:22.460 RUH Usage Desc #006: RUH Attributes: Unused 00:11:22.460 RUH Usage Desc #007: RUH Attributes: Unused 00:11:22.460 00:11:22.460 FDP statistics log page 00:11:22.460 ======================= 00:11:22.460 Host bytes with metadata written: 1544962048 00:11:22.460 Media bytes with metadata written: 1545195520 00:11:22.460 Media bytes erased: 0 00:11:22.460 00:11:22.460 FDP Reclaim unit handle status 00:11:22.460 ============================== 00:11:22.460 Number of RUHS descriptors: 2 00:11:22.460 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003e9c 00:11:22.460 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:22.460 00:11:22.460 FDP write on placement id: 0 success 00:11:22.460 00:11:22.460 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:22.460 00:11:22.460 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:22.460 00:11:22.460 Get Feature: FDP Events for Placement handle: #0 00:11:22.460 ======================== 00:11:22.460 Number of FDP Events: 6 00:11:22.460 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:22.460 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:22.460 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:11:22.460 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:22.460 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:22.460 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
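Worth noting from the statistics log page above: media bytes written barely exceed host bytes written, i.e. write amplification on this (emulated) FDP namespace is essentially 1. Checking the printed numbers directly:

    # WAF = media bytes / host bytes, straight from the stats above
    bc -l <<< '1545195520 / 1544962048'  # ~1.00015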
00:11:22.460 00:11:22.460 FDP events log page 00:11:22.460 =================== 00:11:22.460 Number of FDP events: 1 00:11:22.460 FDP Event #0: 00:11:22.460 Event Type: RU Not Written to Capacity 00:11:22.460 Placement Identifier: Valid 00:11:22.460 NSID: Valid 00:11:22.460 Location: Valid 00:11:22.460 Placement Identifier: 0 00:11:22.460 Event Timestamp: 3 00:11:22.460 Namespace Identifier: 1 00:11:22.460 Reclaim Group Identifier: 0 00:11:22.460 Reclaim Unit Handle Identifier: 0 00:11:22.460 00:11:22.460 FDP test passed 00:11:22.460 00:11:22.460 real 0m0.250s 00:11:22.460 user 0m0.070s 00:11:22.460 sys 0m0.079s 00:11:22.460 11:53:21 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:22.460 11:53:21 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:11:22.460 ************************************ 00:11:22.460 END TEST nvme_flexible_data_placement 00:11:22.460 ************************************ 00:11:22.719 00:11:22.719 real 0m8.448s 00:11:22.719 user 0m1.319s 00:11:22.719 sys 0m2.167s 00:11:22.719 11:53:21 nvme_fdp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:22.719 11:53:21 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:11:22.719 ************************************ 00:11:22.719 END TEST nvme_fdp 00:11:22.719 ************************************ 00:11:22.719 11:53:21 -- spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:11:22.719 11:53:21 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:22.719 11:53:21 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:22.719 11:53:21 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:22.719 11:53:21 -- common/autotest_common.sh@10 -- # set +x 00:11:22.719 ************************************ 00:11:22.719 START TEST nvme_rpc 00:11:22.719 ************************************ 00:11:22.719 11:53:21 nvme_rpc -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:22.719 * Looking for test storage... 
00:11:22.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:22.719 11:53:21 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:22.719 11:53:21 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:22.719 11:53:21 nvme_rpc -- common/autotest_common.sh@1520 -- # bdfs=() 00:11:22.719 11:53:21 nvme_rpc -- common/autotest_common.sh@1520 -- # local bdfs 00:11:22.719 11:53:21 nvme_rpc -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:11:22.719 11:53:21 nvme_rpc -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:11:22.719 11:53:21 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:11:22.719 11:53:21 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:11:22.719 11:53:21 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:22.719 11:53:21 nvme_rpc -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:11:22.719 11:53:21 nvme_rpc -- common/autotest_common.sh@1510 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:22.979 11:53:21 nvme_rpc -- common/autotest_common.sh@1511 -- # (( 4 == 0 )) 00:11:22.979 11:53:21 nvme_rpc -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:22.979 11:53:21 nvme_rpc -- common/autotest_common.sh@1523 -- # echo 0000:00:10.0 00:11:22.979 11:53:21 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:11:22.979 11:53:21 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:22.979 11:53:21 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=83751 00:11:22.979 11:53:21 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:22.979 11:53:21 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 83751 00:11:22.979 11:53:21 nvme_rpc -- common/autotest_common.sh@827 -- # '[' -z 83751 ']' 00:11:22.979 11:53:21 nvme_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.979 11:53:21 nvme_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:22.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.979 11:53:21 nvme_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.979 11:53:21 nvme_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:22.979 11:53:21 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.979 [2024-07-21 11:53:21.736767] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:11:22.979 [2024-07-21 11:53:21.736902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83751 ] 00:11:23.238 [2024-07-21 11:53:21.893899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:23.238 [2024-07-21 11:53:21.942757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.238 [2024-07-21 11:53:21.942921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.807 11:53:22 nvme_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:23.807 11:53:22 nvme_rpc -- common/autotest_common.sh@860 -- # return 0 00:11:23.807 11:53:22 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:11:24.067 Nvme0n1 00:11:24.067 11:53:22 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:24.067 11:53:22 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:24.067 request: 00:11:24.067 { 00:11:24.067 "filename": "non_existing_file", 00:11:24.067 "bdev_name": "Nvme0n1", 00:11:24.067 "method": "bdev_nvme_apply_firmware", 00:11:24.067 "req_id": 1 00:11:24.067 } 00:11:24.067 Got JSON-RPC error response 00:11:24.067 response: 00:11:24.067 { 00:11:24.067 "code": -32603, 00:11:24.067 "message": "open file failed." 00:11:24.067 } 00:11:24.067 11:53:22 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:24.067 11:53:22 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:24.067 11:53:22 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:24.327 11:53:23 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:24.327 11:53:23 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 83751 00:11:24.327 11:53:23 nvme_rpc -- common/autotest_common.sh@946 -- # '[' -z 83751 ']' 00:11:24.327 11:53:23 nvme_rpc -- common/autotest_common.sh@950 -- # kill -0 83751 00:11:24.327 11:53:23 nvme_rpc -- common/autotest_common.sh@951 -- # uname 00:11:24.327 11:53:23 nvme_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:24.327 11:53:23 nvme_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83751 00:11:24.327 11:53:23 nvme_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:24.327 11:53:23 nvme_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:24.327 11:53:23 nvme_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83751' 00:11:24.327 killing process with pid 83751 00:11:24.327 11:53:23 nvme_rpc -- common/autotest_common.sh@965 -- # kill 83751 00:11:24.327 11:53:23 nvme_rpc -- common/autotest_common.sh@970 -- # wait 83751 00:11:24.894 00:11:24.894 real 0m2.113s 00:11:24.894 user 0m3.791s 00:11:24.894 sys 0m0.609s 00:11:24.894 11:53:23 nvme_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:24.894 11:53:23 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.894 ************************************ 00:11:24.894 END TEST nvme_rpc 00:11:24.894 ************************************ 00:11:24.894 11:53:23 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:24.894 11:53:23 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:24.894 
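The nvme_rpc test that just finished above deliberately exercises the bdev_nvme_apply_firmware error path: pointing the RPC at a file that does not exist must fail with JSON-RPC error -32603 ("open file failed."). The same sequence standalone, as a sketch against a running spdk_tgt (paths as in this environment; head -n1 stands in for get_first_nvme_bdf):

    bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)  # 0000:00:10.0 here
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a "$bdf"
    scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 \
        || echo 'expected: open file failed (-32603)'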
11:53:23 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:24.894 11:53:23 -- common/autotest_common.sh@10 -- # set +x 00:11:24.894 ************************************ 00:11:24.894 START TEST nvme_rpc_timeouts 00:11:24.894 ************************************ 00:11:24.895 11:53:23 nvme_rpc_timeouts -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:24.895 * Looking for test storage... 00:11:24.895 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:24.895 11:53:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:24.895 11:53:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_83805 00:11:24.895 11:53:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_83805 00:11:24.895 11:53:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=83829 00:11:24.895 11:53:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:24.895 11:53:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:24.895 11:53:23 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 83829 00:11:24.895 11:53:23 nvme_rpc_timeouts -- common/autotest_common.sh@827 -- # '[' -z 83829 ']' 00:11:24.895 11:53:23 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.895 11:53:23 nvme_rpc_timeouts -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:24.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.895 11:53:23 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.895 11:53:23 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:24.895 11:53:23 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:25.154 [2024-07-21 11:53:23.790984] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:11:25.154 [2024-07-21 11:53:23.791226] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83829 ] 00:11:25.154 [2024-07-21 11:53:23.954500] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:25.154 [2024-07-21 11:53:24.002752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.154 [2024-07-21 11:53:24.002925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.722 11:53:24 nvme_rpc_timeouts -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:25.722 11:53:24 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # return 0 00:11:25.722 Checking default timeout settings: 00:11:25.722 11:53:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:25.722 11:53:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:26.321 Making settings changes with rpc: 00:11:26.321 11:53:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:26.321 11:53:24 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:26.321 Check default vs. modified settings: 00:11:26.321 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:26.321 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:26.583 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:26.583 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:26.583 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_83805 00:11:26.583 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:26.583 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:26.583 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:26.583 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_83805 00:11:26.583 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:26.583 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:26.583 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:26.583 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:26.583 Setting action_on_timeout is changed as expected. 00:11:26.583 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:11:26.583 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:26.583 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_83805 00:11:26.583 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:26.583 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:26.583 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:26.583 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_83805 00:11:26.583 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:26.583 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:26.843 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:26.843 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:26.843 Setting timeout_us is changed as expected. 00:11:26.843 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:26.843 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:26.843 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_83805 00:11:26.843 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:26.843 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:26.843 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:26.843 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_83805 00:11:26.843 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:26.843 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:26.843 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:26.843 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:26.843 Setting timeout_admin_us is changed as expected. 00:11:26.843 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
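All three checks above follow one pattern: dump the JSON config via save_config before and after bdev_nvme_set_options, extract each key from both dumps, and require that the value actually changed. Condensed into a sketch (temp file names as in this run):

    check_changed() {  # condenses nvme_rpc_timeouts.sh @39-@47 as traced above
        local key=$1 before after
        before=$(grep "$key" /tmp/settings_default_83805 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$key" /tmp/settings_modified_83805 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [[ $before != "$after" ]] && echo "Setting $key is changed as expected."
    }
    for s in action_on_timeout timeout_us timeout_admin_us; do check_changed "$s"; done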
00:11:26.843 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:26.843 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_83805 /tmp/settings_modified_83805 00:11:26.843 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 83829 00:11:26.843 11:53:25 nvme_rpc_timeouts -- common/autotest_common.sh@946 -- # '[' -z 83829 ']' 00:11:26.843 11:53:25 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # kill -0 83829 00:11:26.843 11:53:25 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # uname 00:11:26.843 11:53:25 nvme_rpc_timeouts -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:26.843 11:53:25 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 83829 00:11:26.843 11:53:25 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:26.843 11:53:25 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:26.843 killing process with pid 83829 00:11:26.843 11:53:25 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # echo 'killing process with pid 83829' 00:11:26.843 11:53:25 nvme_rpc_timeouts -- common/autotest_common.sh@965 -- # kill 83829 00:11:26.843 11:53:25 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # wait 83829 00:11:27.102 RPC TIMEOUT SETTING TEST PASSED. 00:11:27.102 11:53:25 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:11:27.102 00:11:27.102 real 0m2.312s 00:11:27.102 user 0m4.438s 00:11:27.102 sys 0m0.623s 00:11:27.102 11:53:25 nvme_rpc_timeouts -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:27.102 11:53:25 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:27.102 ************************************ 00:11:27.102 END TEST nvme_rpc_timeouts 00:11:27.102 ************************************ 00:11:27.102 11:53:25 -- spdk/autotest.sh@243 -- # uname -s 00:11:27.102 11:53:25 -- spdk/autotest.sh@243 -- # '[' Linux = Linux ']' 00:11:27.103 11:53:25 -- spdk/autotest.sh@244 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:27.103 11:53:25 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:27.103 11:53:25 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:27.103 11:53:25 -- common/autotest_common.sh@10 -- # set +x 00:11:27.103 ************************************ 00:11:27.103 START TEST sw_hotplug 00:11:27.103 ************************************ 00:11:27.103 11:53:25 sw_hotplug -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:27.362 * Looking for test storage... 
00:11:27.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:27.362 11:53:26 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:27.931 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:27.931 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:27.931 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:27.931 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:27.931 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:28.191 11:53:26 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # hotplug_wait=6 00:11:28.191 11:53:26 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # hotplug_events=3 00:11:28.191 11:53:26 sw_hotplug -- nvme/sw_hotplug.sh@126 -- # nvmes=($(nvme_in_userspace)) 00:11:28.191 11:53:26 sw_hotplug -- nvme/sw_hotplug.sh@126 -- # nvme_in_userspace 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@309 -- # local bdf bdfs 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@310 -- # local nvmes 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@312 -- # [[ -n '' ]] 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@295 -- # local bdf= 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@230 -- # local class 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@231 -- # local subclass 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@232 -- # local progif 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@233 -- # printf %02x 1 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@233 -- # class=01 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@234 -- # printf %02x 8 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@234 -- # subclass=08 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@235 -- # printf %02x 2 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@235 -- # progif=02 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@237 -- # hash lspci 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@239 -- # lspci -mm -n -D 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@240 -- # grep -i -- -p02 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@242 -- # tr -d '"' 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:11:28.191 11:53:26 sw_hotplug -- 
scripts/common.sh@15 -- # local i 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:12.0 00:11:28.191 11:53:26 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:12.0 ]] 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:12.0 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@298 -- # pci_can_use 0000:00:13.0 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@15 -- # local i 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@18 -- # [[ =~ 0000:00:13.0 ]] 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@24 -- # return 0 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@299 -- # echo 0000:00:13.0 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@320 -- # uname -s 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@325 -- # (( 4 )) 00:11:28.192 11:53:26 sw_hotplug -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:28.192 11:53:26 sw_hotplug -- nvme/sw_hotplug.sh@127 -- # nvme_count=2 00:11:28.192 11:53:26 sw_hotplug -- 
nvme/sw_hotplug.sh@128 -- # nvmes=("${nvmes[@]::nvme_count}") 00:11:28.192 11:53:26 sw_hotplug -- nvme/sw_hotplug.sh@130 -- # xtrace_disable 00:11:28.192 11:53:26 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:28.192 11:53:27 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # run_hotplug 00:11:28.192 11:53:27 sw_hotplug -- nvme/sw_hotplug.sh@65 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:11:28.192 11:53:27 sw_hotplug -- nvme/sw_hotplug.sh@73 -- # hotplug_pid=84167 00:11:28.192 11:53:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:11:28.192 11:53:27 sw_hotplug -- nvme/sw_hotplug.sh@75 -- # debug_remove_attach_helper 3 6 false 00:11:28.192 11:53:27 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:11:28.192 11:53:27 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 false 00:11:28.192 11:53:27 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:11:28.192 11:53:27 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:11:28.192 11:53:27 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:11:28.192 11:53:27 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 false 00:11:28.192 11:53:27 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:11:28.192 11:53:27 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:11:28.192 11:53:27 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=false 00:11:28.192 11:53:27 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:11:28.192 11:53:27 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:11:28.452 Initializing NVMe Controllers 00:11:28.452 Attaching to 0000:00:10.0 00:11:28.452 Attaching to 0000:00:11.0 00:11:28.452 Attaching to 0000:00:12.0 00:11:28.452 Attaching to 0000:00:13.0 00:11:28.452 Attached to 0000:00:10.0 00:11:28.452 Attached to 0000:00:11.0 00:11:28.452 Attached to 0000:00:13.0 00:11:28.452 Attached to 0000:00:12.0 00:11:28.452 Initialization complete. Starting I/O... 
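Before any I/O starts, the nvmes array used above is filled by nvme_in_userspace, which enumerates PCI functions with class 01, subclass 08, prog-if 02 (an NVMe controller) and keeps the first nvme_count=2 of them. The pipeline at its core, exactly as traced:

    # class code 0108 / prog-if 02 => NVMe; prints the four 0000:00:1x.0 BDFs seen above
    lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'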
00:11:28.452 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:28.452 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:11:28.452 QEMU NVMe Ctrl (12343 ): 0 I/Os completed (+0) 00:11:28.452 QEMU NVMe Ctrl (12342 ): 0 I/Os completed (+0) 00:11:28.452 00:11:29.391 QEMU NVMe Ctrl (12340 ): 1676 I/Os completed (+1676) 00:11:29.391 QEMU NVMe Ctrl (12341 ): 1676 I/Os completed (+1676) 00:11:29.391 QEMU NVMe Ctrl (12343 ): 1676 I/Os completed (+1676) 00:11:29.391 QEMU NVMe Ctrl (12342 ): 1679 I/Os completed (+1679) 00:11:29.391 00:11:30.777 QEMU NVMe Ctrl (12340 ): 3846 I/Os completed (+2170) 00:11:30.777 QEMU NVMe Ctrl (12341 ): 3846 I/Os completed (+2170) 00:11:30.777 QEMU NVMe Ctrl (12343 ): 3843 I/Os completed (+2167) 00:11:30.777 QEMU NVMe Ctrl (12342 ): 3851 I/Os completed (+2172) 00:11:30.777 00:11:31.715 QEMU NVMe Ctrl (12340 ): 6041 I/Os completed (+2195) 00:11:31.715 QEMU NVMe Ctrl (12341 ): 6067 I/Os completed (+2221) 00:11:31.715 QEMU NVMe Ctrl (12343 ): 6088 I/Os completed (+2245) 00:11:31.715 QEMU NVMe Ctrl (12342 ): 6195 I/Os completed (+2344) 00:11:31.715 00:11:32.662 QEMU NVMe Ctrl (12340 ): 8437 I/Os completed (+2396) 00:11:32.662 QEMU NVMe Ctrl (12341 ): 8465 I/Os completed (+2398) 00:11:32.662 QEMU NVMe Ctrl (12343 ): 8484 I/Os completed (+2396) 00:11:32.662 QEMU NVMe Ctrl (12342 ): 8593 I/Os completed (+2398) 00:11:32.662 00:11:33.598 QEMU NVMe Ctrl (12340 ): 10824 I/Os completed (+2387) 00:11:33.598 QEMU NVMe Ctrl (12341 ): 10853 I/Os completed (+2388) 00:11:33.598 QEMU NVMe Ctrl (12343 ): 10868 I/Os completed (+2384) 00:11:33.598 QEMU NVMe Ctrl (12342 ): 10983 I/Os completed (+2390) 00:11:33.598 00:11:34.179 11:53:33 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:11:34.179 11:53:33 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:11:34.179 11:53:33 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:11:34.179 [2024-07-21 11:53:33.029374] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:34.179 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:34.179 [2024-07-21 11:53:33.030616] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.179 [2024-07-21 11:53:33.030659] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.179 [2024-07-21 11:53:33.030674] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.179 [2024-07-21 11:53:33.030689] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.179 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:34.179 [2024-07-21 11:53:33.032811] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.179 [2024-07-21 11:53:33.032874] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.179 [2024-07-21 11:53:33.032889] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.179 [2024-07-21 11:53:33.032905] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.438 11:53:33 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:11:34.438 11:53:33 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:11:34.438 [2024-07-21 11:53:33.064406] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:11:34.438 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:34.438 [2024-07-21 11:53:33.065576] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.438 [2024-07-21 11:53:33.065622] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.438 [2024-07-21 11:53:33.065640] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.438 [2024-07-21 11:53:33.065655] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.438 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:34.438 [2024-07-21 11:53:33.067094] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.438 [2024-07-21 11:53:33.067136] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.438 [2024-07-21 11:53:33.067156] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.438 [2024-07-21 11:53:33.067174] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.438 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:34.438 11:53:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:11:34.438 EAL: Scan for (pci) bus failed. 00:11:34.438 11:53:33 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:11:34.438 11:53:33 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:11:34.438 11:53:33 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:11:34.438 11:53:33 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:11:34.438 QEMU NVMe Ctrl (12343 ): 13286 I/Os completed (+2418) 00:11:34.438 QEMU NVMe Ctrl (12342 ): 13401 I/Os completed (+2418) 00:11:34.438 00:11:34.438 11:53:33 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:11:34.438 11:53:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:11:34.438 11:53:33 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:11:34.438 11:53:33 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:11:34.438 11:53:33 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:11.0 00:11:34.438 Attaching to 0000:00:10.0 00:11:34.438 Attached to 0000:00:10.0 00:11:34.696 11:53:33 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:11.0 00:11:34.696 11:53:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:11:34.696 11:53:33 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 12 00:11:34.696 Attaching to 0000:00:11.0 00:11:34.696 Attached to 0000:00:11.0 00:11:35.638 QEMU NVMe Ctrl (12343 ): 15790 I/Os completed (+2504) 00:11:35.638 QEMU NVMe Ctrl (12342 ): 15935 I/Os completed (+2534) 00:11:35.638 QEMU NVMe Ctrl (12340 ): 2425 I/Os completed (+2425) 00:11:35.638 QEMU NVMe Ctrl (12341 ): 2158 I/Os completed (+2158) 00:11:35.638 00:11:36.584 QEMU NVMe Ctrl (12343 ): 18114 I/Os completed (+2324) 00:11:36.584 QEMU NVMe Ctrl (12342 ): 18266 I/Os completed (+2331) 00:11:36.584 QEMU NVMe Ctrl (12340 ): 4749 I/Os completed (+2324) 00:11:36.584 QEMU NVMe Ctrl (12341 ): 4485 I/Os completed (+2327) 00:11:36.584 00:11:37.521 QEMU NVMe Ctrl (12343 ): 20462 I/Os completed (+2348) 00:11:37.521 QEMU NVMe Ctrl (12342 ): 20614 I/Os completed (+2348) 00:11:37.521 QEMU NVMe Ctrl (12340 ): 7097 I/Os completed (+2348) 00:11:37.521 QEMU NVMe Ctrl (12341 ): 6838 I/Os completed (+2353) 00:11:37.521 00:11:38.456 QEMU NVMe Ctrl (12343 ): 22710 
I/Os completed (+2248) 00:11:38.456 QEMU NVMe Ctrl (12342 ): 22863 I/Os completed (+2249) 00:11:38.456 QEMU NVMe Ctrl (12340 ): 9348 I/Os completed (+2251) 00:11:38.456 QEMU NVMe Ctrl (12341 ): 9088 I/Os completed (+2250) 00:11:38.456 00:11:39.389 QEMU NVMe Ctrl (12343 ): 25042 I/Os completed (+2332) 00:11:39.389 QEMU NVMe Ctrl (12342 ): 25195 I/Os completed (+2332) 00:11:39.389 QEMU NVMe Ctrl (12340 ): 11682 I/Os completed (+2334) 00:11:39.389 QEMU NVMe Ctrl (12341 ): 11423 I/Os completed (+2335) 00:11:39.389 00:11:40.762 QEMU NVMe Ctrl (12343 ): 27389 I/Os completed (+2347) 00:11:40.762 QEMU NVMe Ctrl (12342 ): 27545 I/Os completed (+2350) 00:11:40.762 QEMU NVMe Ctrl (12340 ): 14030 I/Os completed (+2348) 00:11:40.762 QEMU NVMe Ctrl (12341 ): 13774 I/Os completed (+2351) 00:11:40.762 00:11:41.698 QEMU NVMe Ctrl (12343 ): 29729 I/Os completed (+2340) 00:11:41.698 QEMU NVMe Ctrl (12342 ): 29887 I/Os completed (+2342) 00:11:41.698 QEMU NVMe Ctrl (12340 ): 16373 I/Os completed (+2343) 00:11:41.698 QEMU NVMe Ctrl (12341 ): 16114 I/Os completed (+2340) 00:11:41.698 00:11:42.634 QEMU NVMe Ctrl (12343 ): 32057 I/Os completed (+2328) 00:11:42.634 QEMU NVMe Ctrl (12342 ): 32221 I/Os completed (+2334) 00:11:42.634 QEMU NVMe Ctrl (12340 ): 18701 I/Os completed (+2328) 00:11:42.634 QEMU NVMe Ctrl (12341 ): 18442 I/Os completed (+2328) 00:11:42.634 00:11:43.569 QEMU NVMe Ctrl (12343 ): 34377 I/Os completed (+2320) 00:11:43.569 QEMU NVMe Ctrl (12342 ): 34541 I/Os completed (+2320) 00:11:43.569 QEMU NVMe Ctrl (12340 ): 21024 I/Os completed (+2323) 00:11:43.569 QEMU NVMe Ctrl (12341 ): 20764 I/Os completed (+2322) 00:11:43.569 00:11:44.506 QEMU NVMe Ctrl (12343 ): 36705 I/Os completed (+2328) 00:11:44.506 QEMU NVMe Ctrl (12342 ): 36875 I/Os completed (+2334) 00:11:44.506 QEMU NVMe Ctrl (12340 ): 23366 I/Os completed (+2342) 00:11:44.506 QEMU NVMe Ctrl (12341 ): 23104 I/Os completed (+2340) 00:11:44.506 00:11:45.450 QEMU NVMe Ctrl (12343 ): 39005 I/Os completed (+2300) 00:11:45.450 QEMU NVMe Ctrl (12342 ): 39177 I/Os completed (+2302) 00:11:45.450 QEMU NVMe Ctrl (12340 ): 25668 I/Os completed (+2302) 00:11:45.450 QEMU NVMe Ctrl (12341 ): 25407 I/Os completed (+2303) 00:11:45.450 00:11:46.399 QEMU NVMe Ctrl (12343 ): 41389 I/Os completed (+2384) 00:11:46.399 QEMU NVMe Ctrl (12342 ): 41563 I/Os completed (+2386) 00:11:46.399 QEMU NVMe Ctrl (12340 ): 28052 I/Os completed (+2384) 00:11:46.399 QEMU NVMe Ctrl (12341 ): 27793 I/Os completed (+2386) 00:11:46.399 00:11:46.657 11:53:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:11:46.657 11:53:45 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:11:46.657 11:53:45 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:11:46.657 11:53:45 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:11:46.657 [2024-07-21 11:53:45.349114] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
00:11:46.657 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:46.657 [2024-07-21 11:53:45.350780] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.657 [2024-07-21 11:53:45.350843] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.657 [2024-07-21 11:53:45.350876] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.657 [2024-07-21 11:53:45.350906] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.657 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:46.657 [2024-07-21 11:53:45.352659] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.657 [2024-07-21 11:53:45.352698] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.657 [2024-07-21 11:53:45.352712] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.657 [2024-07-21 11:53:45.352730] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.657 11:53:45 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:11:46.657 11:53:45 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:11:46.657 [2024-07-21 11:53:45.384539] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:46.657 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:46.657 [2024-07-21 11:53:45.385975] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.657 [2024-07-21 11:53:45.386015] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.657 [2024-07-21 11:53:45.386035] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.657 [2024-07-21 11:53:45.386050] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.657 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:46.657 [2024-07-21 11:53:45.387348] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.657 [2024-07-21 11:53:45.387384] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.657 [2024-07-21 11:53:45.387404] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.657 [2024-07-21 11:53:45.387419] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:46.657 11:53:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:11:46.657 11:53:45 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:11:46.657 11:53:45 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:11:46.657 11:53:45 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:11:46.657 11:53:45 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:11:46.914 11:53:45 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:11:46.915 11:53:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:11:46.915 11:53:45 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:11:46.915 11:53:45 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:11:46.915 11:53:45 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:11.0 00:11:46.915 Attaching to 0000:00:10.0 00:11:46.915 Attached to 0000:00:10.0 
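Each hotplug event traced here (the bare "echo 1" at sw_hotplug.sh@35 followed by "Controller removed") is a soft removal driven through sysfs rather than physical hotplug; the matching rescan write appears verbatim in the cleanup trap later in this run. As a sketch, using the standard Linux remove node (rebinding to uio_pci_generic in between is the helper's job):

    echo 1 > /sys/bus/pci/devices/0000:00:10.0/remove  # surprise-remove the controller
    echo 1 > /sys/bus/pci/rescan                       # rediscover it on the bus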
00:11:46.915 11:53:45 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:11.0 00:11:46.915 11:53:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:11:46.915 11:53:45 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 12 00:11:46.915 Attaching to 0000:00:11.0 00:11:46.915 Attached to 0000:00:11.0 00:11:47.481 QEMU NVMe Ctrl (12343 ): 43693 I/Os completed (+2304) 00:11:47.481 QEMU NVMe Ctrl (12342 ): 43908 I/Os completed (+2345) 00:11:47.481 QEMU NVMe Ctrl (12340 ): 1432 I/Os completed (+1432) 00:11:47.481 QEMU NVMe Ctrl (12341 ): 1192 I/Os completed (+1192) 00:11:47.481 00:11:48.418 QEMU NVMe Ctrl (12343 ): 45926 I/Os completed (+2233) 00:11:48.418 QEMU NVMe Ctrl (12342 ): 46154 I/Os completed (+2246) 00:11:48.418 QEMU NVMe Ctrl (12340 ): 3813 I/Os completed (+2381) 00:11:48.418 QEMU NVMe Ctrl (12341 ): 3567 I/Os completed (+2375) 00:11:48.418 00:11:49.355 QEMU NVMe Ctrl (12343 ): 48150 I/Os completed (+2224) 00:11:49.355 QEMU NVMe Ctrl (12342 ): 48394 I/Os completed (+2240) 00:11:49.355 QEMU NVMe Ctrl (12340 ): 6089 I/Os completed (+2276) 00:11:49.355 QEMU NVMe Ctrl (12341 ): 5933 I/Os completed (+2366) 00:11:49.355 00:11:50.732 QEMU NVMe Ctrl (12343 ): 50621 I/Os completed (+2471) 00:11:50.732 QEMU NVMe Ctrl (12342 ): 50864 I/Os completed (+2470) 00:11:50.732 QEMU NVMe Ctrl (12340 ): 8574 I/Os completed (+2485) 00:11:50.732 QEMU NVMe Ctrl (12341 ): 8428 I/Os completed (+2495) 00:11:50.732 00:11:51.669 QEMU NVMe Ctrl (12343 ): 53167 I/Os completed (+2546) 00:11:51.669 QEMU NVMe Ctrl (12342 ): 53776 I/Os completed (+2912) 00:11:51.670 QEMU NVMe Ctrl (12340 ): 11492 I/Os completed (+2918) 00:11:51.670 QEMU NVMe Ctrl (12341 ): 11308 I/Os completed (+2880) 00:11:51.670 00:11:52.624 QEMU NVMe Ctrl (12343 ): 55763 I/Os completed (+2596) 00:11:52.624 QEMU NVMe Ctrl (12342 ): 56731 I/Os completed (+2955) 00:11:52.624 QEMU NVMe Ctrl (12340 ): 14314 I/Os completed (+2822) 00:11:52.624 QEMU NVMe Ctrl (12341 ): 14113 I/Os completed (+2805) 00:11:52.624 00:11:53.560 QEMU NVMe Ctrl (12343 ): 58203 I/Os completed (+2440) 00:11:53.560 QEMU NVMe Ctrl (12342 ): 59175 I/Os completed (+2444) 00:11:53.560 QEMU NVMe Ctrl (12340 ): 16754 I/Os completed (+2440) 00:11:53.560 QEMU NVMe Ctrl (12341 ): 16566 I/Os completed (+2453) 00:11:53.560 00:11:54.511 QEMU NVMe Ctrl (12343 ): 60383 I/Os completed (+2180) 00:11:54.511 QEMU NVMe Ctrl (12342 ): 61429 I/Os completed (+2254) 00:11:54.511 QEMU NVMe Ctrl (12340 ): 18967 I/Os completed (+2213) 00:11:54.511 QEMU NVMe Ctrl (12341 ): 18811 I/Os completed (+2245) 00:11:54.511 00:11:55.446 QEMU NVMe Ctrl (12343 ): 62643 I/Os completed (+2260) 00:11:55.446 QEMU NVMe Ctrl (12342 ): 63692 I/Os completed (+2263) 00:11:55.446 QEMU NVMe Ctrl (12340 ): 21238 I/Os completed (+2271) 00:11:55.446 QEMU NVMe Ctrl (12341 ): 21073 I/Os completed (+2262) 00:11:55.446 00:11:56.383 QEMU NVMe Ctrl (12343 ): 65047 I/Os completed (+2404) 00:11:56.383 QEMU NVMe Ctrl (12342 ): 66102 I/Os completed (+2410) 00:11:56.383 QEMU NVMe Ctrl (12340 ): 23650 I/Os completed (+2412) 00:11:56.383 QEMU NVMe Ctrl (12341 ): 23484 I/Os completed (+2411) 00:11:56.383 00:11:57.319 QEMU NVMe Ctrl (12343 ): 67476 I/Os completed (+2429) 00:11:57.319 QEMU NVMe Ctrl (12342 ): 68528 I/Os completed (+2426) 00:11:57.319 QEMU NVMe Ctrl (12340 ): 26085 I/Os completed (+2435) 00:11:57.319 QEMU NVMe Ctrl (12341 ): 25912 I/Os completed (+2428) 00:11:57.319 00:11:58.698 QEMU NVMe Ctrl (12343 ): 69844 I/Os completed (+2368) 00:11:58.698 QEMU NVMe Ctrl (12342 ): 70899 I/Os completed (+2371) 00:11:58.698 QEMU NVMe Ctrl (12340 
): 28461 I/Os completed (+2376) 00:11:58.698 QEMU NVMe Ctrl (12341 ): 28283 I/Os completed (+2371) 00:11:58.698 00:11:58.970 11:53:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:11:58.970 11:53:57 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:11:58.970 11:53:57 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:11:58.970 11:53:57 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:11:58.970 [2024-07-21 11:53:57.674833] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:58.970 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:58.970 [2024-07-21 11:53:57.676209] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.970 [2024-07-21 11:53:57.676260] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.970 [2024-07-21 11:53:57.676277] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.970 [2024-07-21 11:53:57.676299] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.970 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:58.970 [2024-07-21 11:53:57.677812] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.970 [2024-07-21 11:53:57.677867] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.970 [2024-07-21 11:53:57.677883] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.970 [2024-07-21 11:53:57.677899] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.970 11:53:57 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:11:58.970 11:53:57 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:11:58.970 [2024-07-21 11:53:57.713108] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:58.970 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:58.970 [2024-07-21 11:53:57.714428] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.970 [2024-07-21 11:53:57.714475] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.970 [2024-07-21 11:53:57.714495] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.970 [2024-07-21 11:53:57.714528] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.970 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:58.970 [2024-07-21 11:53:57.716435] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.970 [2024-07-21 11:53:57.716479] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.970 [2024-07-21 11:53:57.716497] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.970 [2024-07-21 11:53:57.716511] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:58.970 11:53:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # false 00:11:58.970 11:53:57 sw_hotplug -- nvme/sw_hotplug.sh@44 -- # echo 1 00:11:58.970 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:58.970 EAL: Scan for (pci) bus failed. 
00:11:58.970 11:53:57 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:11:58.970 11:53:57 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:11:58.970 11:53:57 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:10.0 00:11:59.229 11:53:57 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:10.0 00:11:59.229 11:53:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:11:59.229 11:53:57 sw_hotplug -- nvme/sw_hotplug.sh@46 -- # for dev in "${nvmes[@]}" 00:11:59.229 11:53:57 sw_hotplug -- nvme/sw_hotplug.sh@47 -- # echo uio_pci_generic 00:11:59.229 11:53:57 sw_hotplug -- nvme/sw_hotplug.sh@48 -- # echo 0000:00:11.0 00:11:59.229 Attaching to 0000:00:10.0 00:11:59.229 Attached to 0000:00:10.0 00:11:59.229 11:53:57 sw_hotplug -- nvme/sw_hotplug.sh@49 -- # echo 0000:00:11.0 00:11:59.229 11:53:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # echo '' 00:11:59.229 11:53:58 sw_hotplug -- nvme/sw_hotplug.sh@54 -- # sleep 12 00:11:59.229 Attaching to 0000:00:11.0 00:11:59.229 Attached to 0000:00:11.0 00:11:59.229 unregister_dev: QEMU NVMe Ctrl (12343 ) 00:11:59.229 unregister_dev: QEMU NVMe Ctrl (12342 ) 00:11:59.229 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:59.229 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:59.229 [2024-07-21 11:53:58.030951] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:12:11.451 11:54:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # false 00:12:11.451 11:54:10 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:12:11.452 11:54:10 sw_hotplug -- common/autotest_common.sh@714 -- # time=43.00 00:12:11.452 11:54:10 sw_hotplug -- common/autotest_common.sh@716 -- # echo 43.00 00:12:11.452 11:54:10 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=43.00 00:12:11.452 11:54:10 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.00 2 00:12:11.452 remove_attach_helper took 43.00s to complete (handling 2 nvme drive(s)) 11:54:10 sw_hotplug -- nvme/sw_hotplug.sh@79 -- # sleep 6 00:12:18.020 11:54:16 sw_hotplug -- nvme/sw_hotplug.sh@81 -- # kill -0 84167 00:12:18.020 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 81: kill: (84167) - No such process 00:12:18.020 11:54:16 sw_hotplug -- nvme/sw_hotplug.sh@83 -- # wait 84167 00:12:18.020 11:54:16 sw_hotplug -- nvme/sw_hotplug.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:18.020 11:54:16 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # tgt_run_hotplug 00:12:18.020 11:54:16 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # local dev 00:12:18.020 11:54:16 sw_hotplug -- nvme/sw_hotplug.sh@98 -- # spdk_tgt_pid=84715 00:12:18.020 11:54:16 sw_hotplug -- nvme/sw_hotplug.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:18.020 11:54:16 sw_hotplug -- nvme/sw_hotplug.sh@100 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:12:18.020 11:54:16 sw_hotplug -- nvme/sw_hotplug.sh@101 -- # waitforlisten 84715 00:12:18.020 11:54:16 sw_hotplug -- common/autotest_common.sh@827 -- # '[' -z 84715 ']' 00:12:18.020 11:54:16 sw_hotplug -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.020 11:54:16 sw_hotplug -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:18.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
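waitforlisten above blocks until the freshly started spdk_tgt answers on /var/tmp/spdk.sock, giving up after max_retries=100. A rough equivalent, assuming rpc_get_methods as the liveness probe (the real helper also watches the target pid):

    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.5
    done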
00:12:18.020 11:54:16 sw_hotplug -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.020 11:54:16 sw_hotplug -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:18.020 11:54:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:18.020 [2024-07-21 11:54:16.125081] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:12:18.020 [2024-07-21 11:54:16.125196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84715 ] 00:12:18.020 [2024-07-21 11:54:16.286578] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.020 [2024-07-21 11:54:16.330441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.279 11:54:16 sw_hotplug -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:18.279 11:54:16 sw_hotplug -- common/autotest_common.sh@860 -- # return 0 00:12:18.279 11:54:16 sw_hotplug -- nvme/sw_hotplug.sh@103 -- # for dev in "${!nvmes[@]}" 00:12:18.279 11:54:16 sw_hotplug -- nvme/sw_hotplug.sh@104 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme00 -t PCIe -a 0000:00:10.0 00:12:18.279 11:54:16 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.279 11:54:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:18.279 Nvme00n1 00:12:18.279 11:54:16 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.279 11:54:16 sw_hotplug -- nvme/sw_hotplug.sh@105 -- # waitforbdev Nvme00n1 6 00:12:18.279 11:54:16 sw_hotplug -- common/autotest_common.sh@895 -- # local bdev_name=Nvme00n1 00:12:18.279 11:54:16 sw_hotplug -- common/autotest_common.sh@896 -- # local bdev_timeout=6 00:12:18.279 11:54:16 sw_hotplug -- common/autotest_common.sh@897 -- # local i 00:12:18.279 11:54:16 sw_hotplug -- common/autotest_common.sh@898 -- # [[ -z 6 ]] 00:12:18.279 11:54:16 sw_hotplug -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:12:18.279 11:54:16 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.279 11:54:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:18.279 11:54:16 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.279 11:54:16 sw_hotplug -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Nvme00n1 -t 6 00:12:18.279 11:54:16 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.279 11:54:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:18.279 [ 00:12:18.279 { 00:12:18.279 "name": "Nvme00n1", 00:12:18.279 "aliases": [ 00:12:18.279 "5c459a58-d38b-47ee-aaf3-b02974292ef0" 00:12:18.279 ], 00:12:18.279 "product_name": "NVMe disk", 00:12:18.279 "block_size": 4096, 00:12:18.279 "num_blocks": 1548666, 00:12:18.279 "uuid": "5c459a58-d38b-47ee-aaf3-b02974292ef0", 00:12:18.279 "md_size": 64, 00:12:18.279 "md_interleave": false, 00:12:18.279 "dif_type": 0, 00:12:18.279 "assigned_rate_limits": { 00:12:18.279 "rw_ios_per_sec": 0, 00:12:18.279 "rw_mbytes_per_sec": 0, 00:12:18.279 "r_mbytes_per_sec": 0, 00:12:18.279 "w_mbytes_per_sec": 0 00:12:18.279 }, 00:12:18.279 "claimed": false, 00:12:18.279 "zoned": false, 00:12:18.279 "supported_io_types": { 00:12:18.279 "read": true, 00:12:18.279 "write": true, 00:12:18.279 "unmap": true, 00:12:18.279 "write_zeroes": true, 00:12:18.279 "flush": true, 00:12:18.279 "reset": true, 
00:12:18.279 "compare": true, 00:12:18.279 "compare_and_write": false, 00:12:18.279 "abort": true, 00:12:18.279 "nvme_admin": true, 00:12:18.279 "nvme_io": true 00:12:18.279 }, 00:12:18.279 "driver_specific": { 00:12:18.279 "nvme": [ 00:12:18.279 { 00:12:18.279 "pci_address": "0000:00:10.0", 00:12:18.279 "trid": { 00:12:18.279 "trtype": "PCIe", 00:12:18.279 "traddr": "0000:00:10.0" 00:12:18.279 }, 00:12:18.279 "ctrlr_data": { 00:12:18.279 "cntlid": 0, 00:12:18.279 "vendor_id": "0x1b36", 00:12:18.279 "model_number": "QEMU NVMe Ctrl", 00:12:18.279 "serial_number": "12340", 00:12:18.279 "firmware_revision": "8.0.0", 00:12:18.279 "subnqn": "nqn.2019-08.org.qemu:12340", 00:12:18.279 "oacs": { 00:12:18.279 "security": 0, 00:12:18.279 "format": 1, 00:12:18.279 "firmware": 0, 00:12:18.279 "ns_manage": 1 00:12:18.279 }, 00:12:18.279 "multi_ctrlr": false, 00:12:18.279 "ana_reporting": false 00:12:18.279 }, 00:12:18.279 "vs": { 00:12:18.279 "nvme_version": "1.4" 00:12:18.279 }, 00:12:18.279 "ns_data": { 00:12:18.279 "id": 1, 00:12:18.279 "can_share": false 00:12:18.279 } 00:12:18.279 } 00:12:18.279 ], 00:12:18.279 "mp_policy": "active_passive" 00:12:18.279 } 00:12:18.279 } 00:12:18.279 ] 00:12:18.279 11:54:16 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.279 11:54:16 sw_hotplug -- common/autotest_common.sh@903 -- # return 0 00:12:18.279 11:54:16 sw_hotplug -- nvme/sw_hotplug.sh@103 -- # for dev in "${!nvmes[@]}" 00:12:18.279 11:54:16 sw_hotplug -- nvme/sw_hotplug.sh@104 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme01 -t PCIe -a 0000:00:11.0 00:12:18.279 11:54:16 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.279 11:54:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:18.279 Nvme01n1 00:12:18.279 11:54:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.279 11:54:17 sw_hotplug -- nvme/sw_hotplug.sh@105 -- # waitforbdev Nvme01n1 6 00:12:18.279 11:54:17 sw_hotplug -- common/autotest_common.sh@895 -- # local bdev_name=Nvme01n1 00:12:18.279 11:54:17 sw_hotplug -- common/autotest_common.sh@896 -- # local bdev_timeout=6 00:12:18.279 11:54:17 sw_hotplug -- common/autotest_common.sh@897 -- # local i 00:12:18.279 11:54:17 sw_hotplug -- common/autotest_common.sh@898 -- # [[ -z 6 ]] 00:12:18.279 11:54:17 sw_hotplug -- common/autotest_common.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:12:18.279 11:54:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.279 11:54:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:18.279 11:54:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.279 11:54:17 sw_hotplug -- common/autotest_common.sh@902 -- # rpc_cmd bdev_get_bdevs -b Nvme01n1 -t 6 00:12:18.279 11:54:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.279 11:54:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:18.279 [ 00:12:18.279 { 00:12:18.279 "name": "Nvme01n1", 00:12:18.279 "aliases": [ 00:12:18.279 "c58fb5f5-7daf-4070-ae8a-0d891e7d0d1e" 00:12:18.279 ], 00:12:18.279 "product_name": "NVMe disk", 00:12:18.279 "block_size": 4096, 00:12:18.279 "num_blocks": 1310720, 00:12:18.279 "uuid": "c58fb5f5-7daf-4070-ae8a-0d891e7d0d1e", 00:12:18.279 "assigned_rate_limits": { 00:12:18.279 "rw_ios_per_sec": 0, 00:12:18.279 "rw_mbytes_per_sec": 0, 00:12:18.279 "r_mbytes_per_sec": 0, 00:12:18.279 "w_mbytes_per_sec": 0 00:12:18.279 }, 00:12:18.279 "claimed": false, 00:12:18.279 "zoned": false, 00:12:18.279 "supported_io_types": { 00:12:18.279 "read": 
true, 00:12:18.279 "write": true, 00:12:18.279 "unmap": true, 00:12:18.279 "write_zeroes": true, 00:12:18.279 "flush": true, 00:12:18.279 "reset": true, 00:12:18.279 "compare": true, 00:12:18.279 "compare_and_write": false, 00:12:18.279 "abort": true, 00:12:18.279 "nvme_admin": true, 00:12:18.279 "nvme_io": true 00:12:18.279 }, 00:12:18.279 "driver_specific": { 00:12:18.279 "nvme": [ 00:12:18.279 { 00:12:18.279 "pci_address": "0000:00:11.0", 00:12:18.279 "trid": { 00:12:18.279 "trtype": "PCIe", 00:12:18.279 "traddr": "0000:00:11.0" 00:12:18.279 }, 00:12:18.279 "ctrlr_data": { 00:12:18.279 "cntlid": 0, 00:12:18.279 "vendor_id": "0x1b36", 00:12:18.279 "model_number": "QEMU NVMe Ctrl", 00:12:18.279 "serial_number": "12341", 00:12:18.279 "firmware_revision": "8.0.0", 00:12:18.279 "subnqn": "nqn.2019-08.org.qemu:12341", 00:12:18.279 "oacs": { 00:12:18.279 "security": 0, 00:12:18.279 "format": 1, 00:12:18.279 "firmware": 0, 00:12:18.279 "ns_manage": 1 00:12:18.279 }, 00:12:18.279 "multi_ctrlr": false, 00:12:18.279 "ana_reporting": false 00:12:18.279 }, 00:12:18.279 "vs": { 00:12:18.279 "nvme_version": "1.4" 00:12:18.279 }, 00:12:18.279 "ns_data": { 00:12:18.279 "id": 1, 00:12:18.279 "can_share": false 00:12:18.279 } 00:12:18.279 } 00:12:18.279 ], 00:12:18.279 "mp_policy": "active_passive" 00:12:18.279 } 00:12:18.279 } 00:12:18.279 ] 00:12:18.279 11:54:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.279 11:54:17 sw_hotplug -- common/autotest_common.sh@903 -- # return 0 00:12:18.279 11:54:17 sw_hotplug -- nvme/sw_hotplug.sh@108 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:18.279 11:54:17 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.279 11:54:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:18.279 11:54:17 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.279 11:54:17 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # debug_remove_attach_helper 3 6 true 00:12:18.279 11:54:17 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:12:18.279 11:54:17 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 true 00:12:18.279 11:54:17 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:12:18.279 11:54:17 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:12:18.279 11:54:17 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:12:18.279 11:54:17 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 true 00:12:18.279 11:54:17 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:12:18.279 11:54:17 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:12:18.279 11:54:17 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=true 00:12:18.279 11:54:17 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:12:18.279 11:54:17 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:12:24.885 11:54:23 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:12:24.885 11:54:23 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:24.885 11:54:23 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:24.885 11:54:23 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:24.885 11:54:23 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:24.885 11:54:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:12:24.885 11:54:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:12:24.885 [2024-07-21 11:54:23.184727] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in 
failed state. 00:12:24.885 [2024-07-21 11:54:23.186271] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.885 [2024-07-21 11:54:23.186315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.885 [2024-07-21 11:54:23.186331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.885 [2024-07-21 11:54:23.186352] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.885 [2024-07-21 11:54:23.186362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.885 [2024-07-21 11:54:23.186372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.885 [2024-07-21 11:54:23.186380] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.885 [2024-07-21 11:54:23.186394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.885 [2024-07-21 11:54:23.186402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.885 [2024-07-21 11:54:23.186422] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.885 [2024-07-21 11:54:23.186430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.885 [2024-07-21 11:54:23.186440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.885 [2024-07-21 11:54:23.583979] nvme_ctrlr.c:1043:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
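The failed-state and qpair-abort records here (the block for 0000:00:11.0 continues below) are the expected signature of a surprise hot-remove under load: the driver marks the controller failed and aborts its outstanding ASYNC EVENT REQUESTs. A minimal sketch of the sysfs remove/rescan cycle the helper drives; the rescan path appears verbatim in the trap set earlier in this run, while the per-device remove path is the standard Linux PCI interface and is assumed here:

    bdf=0000:00:10.0
    echo 1 > "/sys/bus/pci/devices/${bdf}/remove"   # surprise-remove the device; in-flight commands get aborted
    sleep 6                                         # hotplug_wait from the script: let SPDK observe the removal
    echo 1 > /sys/bus/pci/rescan                    # re-enumerate the bus so the device can come back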
00:12:24.885 [2024-07-21 11:54:23.585586] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.885 [2024-07-21 11:54:23.585624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.885 [2024-07-21 11:54:23.585639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.885 [2024-07-21 11:54:23.585656] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.885 [2024-07-21 11:54:23.585667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.885 [2024-07-21 11:54:23.585677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.885 [2024-07-21 11:54:23.585687] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.885 [2024-07-21 11:54:23.585695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.885 [2024-07-21 11:54:23.585705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.885 [2024-07-21 11:54:23.585713] nvme_pcie_common.c: 745:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.885 [2024-07-21 11:54:23.585726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.885 [2024-07-21 11:54:23.585734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:31.448 11:54:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:12:31.448 11:54:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:12:31.448 11:54:29 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.448 11:54:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:31.448 11:54:29 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.448 11:54:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 4 == 0 )) 00:12:31.448 11:54:29 sw_hotplug -- nvme/sw_hotplug.sh@41 -- # return 1 00:12:31.448 11:54:29 sw_hotplug -- common/autotest_common.sh@714 -- # trap - ERR 00:12:31.448 11:54:29 sw_hotplug -- common/autotest_common.sh@714 -- # print_backtrace 00:12:31.448 11:54:29 sw_hotplug -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:12:31.448 11:54:29 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:12:31.448 11:54:29 sw_hotplug -- common/autotest_common.sh@714 -- # time=12.11 00:12:31.448 11:54:29 sw_hotplug -- common/autotest_common.sh@714 -- # trap - ERR 00:12:31.448 11:54:29 sw_hotplug -- common/autotest_common.sh@714 -- # print_backtrace 00:12:31.448 11:54:29 sw_hotplug -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:12:31.448 11:54:29 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:12:31.448 11:54:29 sw_hotplug -- common/autotest_common.sh@716 -- # echo 12.11 00:12:31.448 11:54:29 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=12.11 00:12:31.448 11:54:29 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme 
drive(s))' 12.11 2 00:12:31.448 remove_attach_helper took 12.11s to complete (handling 2 nvme drive(s)) 11:54:29 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:31.448 11:54:29 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.448 11:54:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:31.448 11:54:29 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.448 11:54:29 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:31.448 11:54:29 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.448 11:54:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:31.448 11:54:29 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.448 11:54:29 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # debug_remove_attach_helper 3 6 true 00:12:31.448 11:54:29 sw_hotplug -- nvme/sw_hotplug.sh@14 -- # local helper_time=0 00:12:31.448 11:54:29 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # timing_cmd remove_attach_helper 3 6 true 00:12:31.448 11:54:29 sw_hotplug -- common/autotest_common.sh@706 -- # [[ -t 0 ]] 00:12:31.448 11:54:29 sw_hotplug -- common/autotest_common.sh@706 -- # exec 00:12:31.448 11:54:29 sw_hotplug -- common/autotest_common.sh@708 -- # local time=0 TIMEFORMAT=%2R 00:12:31.448 11:54:29 sw_hotplug -- common/autotest_common.sh@714 -- # remove_attach_helper 3 6 true 00:12:31.448 11:54:29 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # local hotplug_events=3 00:12:31.448 11:54:29 sw_hotplug -- nvme/sw_hotplug.sh@23 -- # local hotplug_wait=6 00:12:31.448 11:54:29 sw_hotplug -- nvme/sw_hotplug.sh@24 -- # local use_bdev=true 00:12:31.448 11:54:29 sw_hotplug -- nvme/sw_hotplug.sh@25 -- # local dev bdfs 00:12:31.448 11:54:29 sw_hotplug -- nvme/sw_hotplug.sh@31 -- # sleep 6 00:12:36.725 11:54:35 sw_hotplug -- nvme/sw_hotplug.sh@33 -- # (( hotplug_events-- )) 00:12:36.725 11:54:35 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:36.725 11:54:35 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:36.725 11:54:35 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # trap - ERR 00:12:36.725 11:54:35 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # print_backtrace 00:12:36.725 11:54:35 sw_hotplug -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:12:36.725 11:54:35 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:12:36.725 11:54:35 sw_hotplug -- nvme/sw_hotplug.sh@34 -- # for dev in "${nvmes[@]}" 00:12:36.725 11:54:35 sw_hotplug -- nvme/sw_hotplug.sh@35 -- # echo 1 00:12:36.725 11:54:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # true 00:12:36.725 11:54:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # sleep 6 00:12:43.298 11:54:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # rpc_cmd bdev_get_bdevs 00:12:43.298 11:54:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # jq length 00:12:43.298 11:54:41 sw_hotplug -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:43.298 11:54:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:43.298 11:54:41 sw_hotplug -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:43.298 11:54:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # (( 4 == 0 )) 00:12:43.298 11:54:41 sw_hotplug -- nvme/sw_hotplug.sh@41 -- # return 1 00:12:43.298 11:54:41 sw_hotplug -- common/autotest_common.sh@714 -- # time=12.06 00:12:43.298 11:54:41 sw_hotplug -- common/autotest_common.sh@714 -- # trap - ERR 00:12:43.298 11:54:41 sw_hotplug -- common/autotest_common.sh@714 -- # print_backtrace 00:12:43.298 11:54:41 sw_hotplug -- 
common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:12:43.298 11:54:41 sw_hotplug -- common/autotest_common.sh@1149 -- # return 0 00:12:43.298 11:54:41 sw_hotplug -- common/autotest_common.sh@716 -- # echo 12.06 00:12:43.298 11:54:41 sw_hotplug -- nvme/sw_hotplug.sh@16 -- # helper_time=12.06 00:12:43.298 11:54:41 sw_hotplug -- nvme/sw_hotplug.sh@17 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 12.06 2 00:12:43.298 remove_attach_helper took 12.06s to complete (handling 2 nvme drive(s)) 11:54:41 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:12:43.298 11:54:41 sw_hotplug -- nvme/sw_hotplug.sh@118 -- # killprocess 84715 00:12:43.298 11:54:41 sw_hotplug -- common/autotest_common.sh@946 -- # '[' -z 84715 ']' 00:12:43.298 11:54:41 sw_hotplug -- common/autotest_common.sh@950 -- # kill -0 84715 00:12:43.298 11:54:41 sw_hotplug -- common/autotest_common.sh@951 -- # uname 00:12:43.298 11:54:41 sw_hotplug -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:43.298 11:54:41 sw_hotplug -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 84715 00:12:43.298 11:54:41 sw_hotplug -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:43.298 killing process with pid 84715 00:12:43.298 11:54:41 sw_hotplug -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:43.298 11:54:41 sw_hotplug -- common/autotest_common.sh@964 -- # echo 'killing process with pid 84715' 00:12:43.298 11:54:41 sw_hotplug -- common/autotest_common.sh@965 -- # kill 84715 00:12:43.298 11:54:41 sw_hotplug -- common/autotest_common.sh@970 -- # wait 84715 00:12:43.298 00:12:43.298 real 1m15.795s 00:12:43.298 user 0m45.456s 00:12:43.298 sys 0m13.306s 00:12:43.298 11:54:41 sw_hotplug -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:43.298 ************************************ 00:12:43.298 END TEST sw_hotplug 00:12:43.298 ************************************ 00:12:43.298 11:54:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:43.298 11:54:41 -- spdk/autotest.sh@247 -- # [[ 1 -eq 1 ]] 00:12:43.298 11:54:41 -- spdk/autotest.sh@248 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:43.298 11:54:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:43.298 11:54:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:43.298 11:54:41 -- common/autotest_common.sh@10 -- # set +x 00:12:43.298 ************************************ 00:12:43.298 START TEST nvme_xnvme 00:12:43.299 ************************************ 00:12:43.299 11:54:41 nvme_xnvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:43.299 * Looking for test storage... 
00:12:43.299 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:43.299 11:54:41 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:43.299 11:54:41 nvme_xnvme -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.299 11:54:41 nvme_xnvme -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.299 11:54:41 nvme_xnvme -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.299 11:54:41 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.299 11:54:41 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.299 11:54:41 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.299 11:54:41 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:43.299 11:54:41 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.299 11:54:41 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:43.299 11:54:41 nvme_xnvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:12:43.299 11:54:41 nvme_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:43.299 11:54:41 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:43.299 ************************************ 00:12:43.299 START TEST xnvme_to_malloc_dd_copy 00:12:43.299 ************************************ 00:12:43.299 11:54:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1121 -- # malloc_to_xnvme_copy 00:12:43.299 11:54:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:43.299 11:54:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:12:43.299 11:54:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:12:43.299 11:54:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # return 00:12:43.299 11:54:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 
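init_null_blk above loads the kernel null_blk module, which provides the RAM-backed /dev/nullb0 that the xnvme bdev sits on for the rest of this test. Reproduced by hand, with the same module parameter seen in the trace:

    modprobe null_blk gb=1    # creates /dev/nullb0, a 1 GiB null block device
    # ... run the copy workloads against /dev/nullb0 ...
    modprobe -r null_blk      # teardown, as remove_null_blk does at the end of the test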
00:12:43.299 11:54:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:43.299 11:54:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:43.299 11:54:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:43.299 11:54:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:43.299 11:54:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:43.299 11:54:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:43.299 11:54:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:43.299 11:54:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:43.299 11:54:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:43.299 11:54:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:43.299 11:54:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:43.299 11:54:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:43.299 11:54:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:43.299 11:54:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:43.299 11:54:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:43.299 11:54:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:43.299 11:54:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:43.299 11:54:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:43.299 11:54:41 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:43.299 { 00:12:43.299 "subsystems": [ 00:12:43.299 { 00:12:43.299 "subsystem": "bdev", 00:12:43.299 "config": [ 00:12:43.299 { 00:12:43.299 "params": { 00:12:43.299 "block_size": 512, 00:12:43.299 "num_blocks": 2097152, 00:12:43.299 "name": "malloc0" 00:12:43.299 }, 00:12:43.299 "method": "bdev_malloc_create" 00:12:43.299 }, 00:12:43.299 { 00:12:43.299 "params": { 00:12:43.299 "io_mechanism": "libaio", 00:12:43.299 "filename": "/dev/nullb0", 00:12:43.299 "name": "null0" 00:12:43.299 }, 00:12:43.299 "method": "bdev_xnvme_create" 00:12:43.299 }, 00:12:43.299 { 00:12:43.299 "method": "bdev_wait_for_examine" 00:12:43.299 } 00:12:43.299 ] 00:12:43.299 } 00:12:43.299 ] 00:12:43.299 } 00:12:43.299 [2024-07-21 11:54:42.030962] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
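The JSON emitted above is handed to spdk_dd on /dev/fd/62: it declares a 1 GiB malloc bdev (2097152 blocks of 512 bytes) and an xnvme bdev driving /dev/nullb0 through libaio, then the copy runs malloc0 into null0. The equivalent standalone invocation, with the config saved to a file instead (xnvme.json is a stand-in name for the config shown above):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json ./xnvme.json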
00:12:43.299 [2024-07-21 11:54:42.031126] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85062 ] 00:12:43.558 [2024-07-21 11:54:42.189796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.558 [2024-07-21 11:54:42.236230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.002  Copying: 274/1024 [MB] (274 MBps) Copying: 550/1024 [MB] (275 MBps) Copying: 823/1024 [MB] (273 MBps) Copying: 1024/1024 [MB] (average 274 MBps) 00:12:48.002 00:12:48.002 11:54:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:48.002 11:54:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:48.002 11:54:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:48.002 11:54:46 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:48.260 { 00:12:48.260 "subsystems": [ 00:12:48.260 { 00:12:48.260 "subsystem": "bdev", 00:12:48.260 "config": [ 00:12:48.260 { 00:12:48.260 "params": { 00:12:48.260 "block_size": 512, 00:12:48.260 "num_blocks": 2097152, 00:12:48.260 "name": "malloc0" 00:12:48.260 }, 00:12:48.260 "method": "bdev_malloc_create" 00:12:48.260 }, 00:12:48.260 { 00:12:48.260 "params": { 00:12:48.260 "io_mechanism": "libaio", 00:12:48.260 "filename": "/dev/nullb0", 00:12:48.260 "name": "null0" 00:12:48.261 }, 00:12:48.261 "method": "bdev_xnvme_create" 00:12:48.261 }, 00:12:48.261 { 00:12:48.261 "method": "bdev_wait_for_examine" 00:12:48.261 } 00:12:48.261 ] 00:12:48.261 } 00:12:48.261 ] 00:12:48.261 } 00:12:48.261 [2024-07-21 11:54:46.895666] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:12:48.261 [2024-07-21 11:54:46.895897] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85123 ] 00:12:48.261 [2024-07-21 11:54:47.064907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:48.261 [2024-07-21 11:54:47.108655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.775  Copying: 281/1024 [MB] (281 MBps) Copying: 565/1024 [MB] (283 MBps) Copying: 849/1024 [MB] (283 MBps) Copying: 1024/1024 [MB] (average 283 MBps) 00:12:52.775 00:12:52.775 11:54:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:52.775 11:54:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:52.775 11:54:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:52.775 11:54:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:52.775 11:54:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:52.775 11:54:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:52.775 { 00:12:52.775 "subsystems": [ 00:12:52.775 { 00:12:52.775 "subsystem": "bdev", 00:12:52.775 "config": [ 00:12:52.775 { 00:12:52.775 "params": { 00:12:52.775 "block_size": 512, 00:12:52.775 "num_blocks": 2097152, 00:12:52.775 "name": "malloc0" 00:12:52.775 }, 00:12:52.775 "method": "bdev_malloc_create" 00:12:52.775 }, 00:12:52.775 { 00:12:52.775 "params": { 00:12:52.775 "io_mechanism": "io_uring", 00:12:52.775 "filename": "/dev/nullb0", 00:12:52.775 "name": "null0" 00:12:52.775 }, 00:12:52.775 "method": "bdev_xnvme_create" 00:12:52.775 }, 00:12:52.775 { 00:12:52.775 "method": "bdev_wait_for_examine" 00:12:52.775 } 00:12:52.775 ] 00:12:52.775 } 00:12:52.775 ] 00:12:52.775 } 00:12:53.034 [2024-07-21 11:54:51.647675] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
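Between passes the script changes exactly one field, method_bdev_xnvme_create_0["io_mechanism"], from libaio to io_uring; the rest of the config is identical. The same bdev can also be created at runtime over RPC, with the method name and argument order matching the bdev_xnvme_create calls that appear later in this log:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_xnvme_create /dev/nullb0 null0 io_uring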
00:12:53.034 [2024-07-21 11:54:51.647774] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85188 ] 00:12:53.034 [2024-07-21 11:54:51.805154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:53.034 [2024-07-21 11:54:51.848886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.419  Copying: 286/1024 [MB] (286 MBps) Copying: 573/1024 [MB] (287 MBps) Copying: 860/1024 [MB] (287 MBps) Copying: 1024/1024 [MB] (average 286 MBps) 00:12:57.419 00:12:57.419 11:54:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:57.419 11:54:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:57.419 11:54:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:57.419 11:54:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:57.677 { 00:12:57.677 "subsystems": [ 00:12:57.677 { 00:12:57.677 "subsystem": "bdev", 00:12:57.677 "config": [ 00:12:57.677 { 00:12:57.677 "params": { 00:12:57.677 "block_size": 512, 00:12:57.677 "num_blocks": 2097152, 00:12:57.677 "name": "malloc0" 00:12:57.677 }, 00:12:57.677 "method": "bdev_malloc_create" 00:12:57.677 }, 00:12:57.677 { 00:12:57.677 "params": { 00:12:57.677 "io_mechanism": "io_uring", 00:12:57.677 "filename": "/dev/nullb0", 00:12:57.677 "name": "null0" 00:12:57.677 }, 00:12:57.677 "method": "bdev_xnvme_create" 00:12:57.677 }, 00:12:57.677 { 00:12:57.677 "method": "bdev_wait_for_examine" 00:12:57.677 } 00:12:57.677 ] 00:12:57.677 } 00:12:57.677 ] 00:12:57.677 } 00:12:57.677 [2024-07-21 11:54:56.316929] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:12:57.678 [2024-07-21 11:54:56.317090] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85242 ] 00:12:57.678 [2024-07-21 11:54:56.474792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.678 [2024-07-21 11:54:56.522696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.208  Copying: 294/1024 [MB] (294 MBps) Copying: 591/1024 [MB] (296 MBps) Copying: 891/1024 [MB] (300 MBps) Copying: 1024/1024 [MB] (average 298 MBps) 00:13:02.208 00:13:02.208 11:55:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:13:02.208 11:55:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@195 -- # modprobe -r null_blk 00:13:02.208 00:13:02.208 real 0m18.890s 00:13:02.208 user 0m15.494s 00:13:02.208 sys 0m3.014s 00:13:02.208 11:55:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:02.208 11:55:00 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:02.208 ************************************ 00:13:02.208 END TEST xnvme_to_malloc_dd_copy 00:13:02.208 ************************************ 00:13:02.208 11:55:00 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:02.208 11:55:00 nvme_xnvme -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:13:02.208 11:55:00 nvme_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:02.208 11:55:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:02.208 ************************************ 00:13:02.208 START TEST xnvme_bdevperf 00:13:02.208 ************************************ 00:13:02.208 11:55:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1121 -- # xnvme_bdevperf 00:13:02.208 11:55:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:13:02.208 11:55:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:13:02.208 11:55:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:13:02.208 11:55:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # return 00:13:02.208 11:55:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:13:02.208 11:55:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:02.208 11:55:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:13:02.208 11:55:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:13:02.208 11:55:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:13:02.208 11:55:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:13:02.208 11:55:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:13:02.208 11:55:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:13:02.208 11:55:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:02.208 11:55:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:02.208 11:55:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:02.208 11:55:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:02.208 11:55:00 
nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:02.208 11:55:00 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:02.208 11:55:00 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:02.208 11:55:00 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:02.208 { 00:13:02.208 "subsystems": [ 00:13:02.208 { 00:13:02.208 "subsystem": "bdev", 00:13:02.208 "config": [ 00:13:02.208 { 00:13:02.208 "params": { 00:13:02.208 "io_mechanism": "libaio", 00:13:02.208 "filename": "/dev/nullb0", 00:13:02.208 "name": "null0" 00:13:02.208 }, 00:13:02.208 "method": "bdev_xnvme_create" 00:13:02.208 }, 00:13:02.208 { 00:13:02.208 "method": "bdev_wait_for_examine" 00:13:02.208 } 00:13:02.208 ] 00:13:02.208 } 00:13:02.208 ] 00:13:02.208 } 00:13:02.208 [2024-07-21 11:55:00.996103] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:02.208 [2024-07-21 11:55:00.996203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85326 ] 00:13:02.467 [2024-07-21 11:55:01.155188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.467 [2024-07-21 11:55:01.199758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.467 Running I/O for 5 seconds... 00:13:07.739 00:13:07.739 Latency(us) 00:13:07.739 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:07.739 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:07.739 null0 : 5.00 188474.97 736.23 0.00 0.00 337.21 117.16 758.39 00:13:07.739 =================================================================================================================== 00:13:07.739 Total : 188474.97 736.23 0.00 0.00 337.21 117.16 758.39 00:13:07.740 11:55:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:07.740 11:55:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:07.740 11:55:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:07.740 11:55:06 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:07.740 11:55:06 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:07.740 11:55:06 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:07.740 { 00:13:07.740 "subsystems": [ 00:13:07.740 { 00:13:07.740 "subsystem": "bdev", 00:13:07.740 "config": [ 00:13:07.740 { 00:13:07.740 "params": { 00:13:07.740 "io_mechanism": "io_uring", 00:13:07.740 "filename": "/dev/nullb0", 00:13:07.740 "name": "null0" 00:13:07.740 }, 00:13:07.740 "method": "bdev_xnvme_create" 00:13:07.740 }, 00:13:07.740 { 00:13:07.740 "method": "bdev_wait_for_examine" 00:13:07.740 } 00:13:07.740 ] 00:13:07.740 } 00:13:07.740 ] 00:13:07.740 } 00:13:07.740 [2024-07-21 11:55:06.593514] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
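The bdevperf flags traced above decode as: -q 64 queue depth, -w randread workload, -t 5 seconds of runtime, -o 4096-byte I/O size, and -T null0 to restrict the job to that bdev. Run outside the harness it would look like this (null0.json standing in for the config piped through /dev/fd/62):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json ./null0.json -q 64 -w randread -t 5 -T null0 -o 4096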
00:13:07.740 [2024-07-21 11:55:06.593666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85389 ] 00:13:07.998 [2024-07-21 11:55:06.752604] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.998 [2024-07-21 11:55:06.799706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.259 Running I/O for 5 seconds... 00:13:13.569 00:13:13.569 Latency(us) 00:13:13.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:13.569 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:13.569 null0 : 5.00 229944.58 898.22 0.00 0.00 276.10 163.66 375.62 00:13:13.569 =================================================================================================================== 00:13:13.569 Total : 229944.58 898.22 0.00 0.00 276.10 163.66 375.62 00:13:13.569 11:55:12 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:13.569 11:55:12 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@195 -- # modprobe -r null_blk 00:13:13.569 00:13:13.569 real 0m11.244s 00:13:13.569 user 0m8.788s 00:13:13.569 sys 0m2.253s 00:13:13.569 11:55:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:13.569 11:55:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:13.569 ************************************ 00:13:13.569 END TEST xnvme_bdevperf 00:13:13.569 ************************************ 00:13:13.569 00:13:13.569 real 0m30.382s 00:13:13.569 user 0m24.373s 00:13:13.569 sys 0m5.431s 00:13:13.569 11:55:12 nvme_xnvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:13.569 11:55:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:13.569 ************************************ 00:13:13.569 END TEST nvme_xnvme 00:13:13.569 ************************************ 00:13:13.569 11:55:12 -- spdk/autotest.sh@249 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:13.569 11:55:12 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:13.569 11:55:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:13.569 11:55:12 -- common/autotest_common.sh@10 -- # set +x 00:13:13.569 ************************************ 00:13:13.569 START TEST blockdev_xnvme 00:13:13.569 ************************************ 00:13:13.569 11:55:12 blockdev_xnvme -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:13.569 * Looking for test storage... 
00:13:13.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:13.569 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:13.569 11:55:12 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:13.569 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:13.569 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:13.569 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:13.569 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:13.569 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:13.569 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:13.569 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:13.569 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:13:13.569 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:13:13.569 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:13:13.569 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@674 -- # uname -s 00:13:13.569 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@674 -- # '[' Linux = Linux ']' 00:13:13.569 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@676 -- # PRE_RESERVED_MEM=0 00:13:13.569 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@682 -- # test_type=xnvme 00:13:13.569 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:13:13.569 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@684 -- # dek= 00:13:13.569 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:13:13.569 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:13:13.569 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:13:13.569 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == bdev ]] 00:13:13.570 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@690 -- # [[ xnvme == crypto_* ]] 00:13:13.570 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:13:13.570 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=85524 00:13:13.570 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:13.570 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:13.570 11:55:12 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 85524 00:13:13.570 11:55:12 blockdev_xnvme -- common/autotest_common.sh@827 -- # '[' -z 85524 ']' 00:13:13.570 11:55:12 blockdev_xnvme -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.570 11:55:12 blockdev_xnvme -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:13.570 11:55:12 blockdev_xnvme -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.570 11:55:12 blockdev_xnvme -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:13.570 11:55:12 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:13.829 [2024-07-21 11:55:12.475778] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
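waitforlisten blocks until the freshly started spdk_tgt is accepting RPCs on /var/tmp/spdk.sock before any rpc_cmd is issued. A hypothetical minimal version of that wait; the real helper in autotest_common.sh also verifies the pid is still alive and enforces max_retries:

    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # the socket appears once the target is listening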
00:13:13.829 [2024-07-21 11:55:12.476000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85524 ] 00:13:13.829 [2024-07-21 11:55:12.637943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:13.829 [2024-07-21 11:55:12.682364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.395 11:55:13 blockdev_xnvme -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:14.395 11:55:13 blockdev_xnvme -- common/autotest_common.sh@860 -- # return 0 00:13:14.395 11:55:13 blockdev_xnvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:13:14.395 11:55:13 blockdev_xnvme -- bdev/blockdev.sh@729 -- # setup_xnvme_conf 00:13:14.395 11:55:13 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:14.395 11:55:13 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:14.395 11:55:13 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:14.654 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:14.913 Waiting for block devices as requested 00:13:15.172 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:15.172 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:20.452 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:20.452 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1666 -- # local nvme bdf 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n2 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme0n2 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n3 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme0n3 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:13:20.452 
11:55:19 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1c1n1 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme1c1n1 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme1n1 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme1n1 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme2n1 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme2n1 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1669 -- # is_block_zoned nvme3n1 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local device=nvme3n1 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:20.452 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:13:20.452 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:20.452 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:20.452 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:20.452 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:20.452 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:20.452 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:13:20.452 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:20.452 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:20.452 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:20.452 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:13:20.452 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:20.452 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:20.452 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:20.452 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:20.452 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:20.452 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:20.452 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 
00:13:20.452 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:20.452 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:20.452 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:20.452 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:20.452 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:20.452 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:20.452 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:20.453 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:20.453 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:20.453 11:55:19 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.453 11:55:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.453 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:20.453 nvme0n1 00:13:20.453 nvme0n2 00:13:20.453 nvme0n3 00:13:20.453 nvme1n1 00:13:20.453 nvme2n1 00:13:20.453 nvme3n1 00:13:20.453 11:55:19 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.453 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:13:20.453 11:55:19 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.453 11:55:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.453 11:55:19 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.453 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@740 -- # cat 00:13:20.453 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:13:20.453 11:55:19 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.453 11:55:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.453 11:55:19 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.453 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:13:20.453 11:55:19 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.453 11:55:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.453 11:55:19 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.453 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:20.453 11:55:19 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.453 11:55:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.453 11:55:19 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.453 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:13:20.453 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:13:20.453 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:13:20.453 11:55:19 blockdev_xnvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:20.453 11:55:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 
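The mapfile pipeline above collects every unclaimed bdev registered with the target, which is how the six xnvme bdevs end up in bdevs_name. The same query by hand; the claimed filter is copied from the trace, and pulling out just the name field is an addition:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'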
00:13:20.453 11:55:19 blockdev_xnvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:20.453 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:13:20.453 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c42c1078-f846-443f-9610-223a00c65f9c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c42c1078-f846-443f-9610-223a00c65f9c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "9ef3f6a6-72db-42bc-a5b7-dd42ad993e79"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9ef3f6a6-72db-42bc-a5b7-dd42ad993e79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "f4064b21-000e-47df-beb0-cbbba9ee218e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f4064b21-000e-47df-beb0-cbbba9ee218e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "1ab3d6c1-316d-4b04-9d03-1cfaed2dc368"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1ab3d6c1-316d-4b04-9d03-1cfaed2dc368",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "936364c6-1aba-46ee-88f1-d725f5629373"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "936364c6-1aba-46ee-88f1-d725f5629373",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": 
false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "38db825b-2d71-4856-9d9a-38b7e341efcb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "38db825b-2d71-4856-9d9a-38b7e341efcb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:13:20.453 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:13:20.453 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:13:20.453 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=nvme0n1 00:13:20.453 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:13:20.453 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@754 -- # killprocess 85524 00:13:20.453 11:55:19 blockdev_xnvme -- common/autotest_common.sh@946 -- # '[' -z 85524 ']' 00:13:20.453 11:55:19 blockdev_xnvme -- common/autotest_common.sh@950 -- # kill -0 85524 00:13:20.453 11:55:19 blockdev_xnvme -- common/autotest_common.sh@951 -- # uname 00:13:20.453 11:55:19 blockdev_xnvme -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:20.453 11:55:19 blockdev_xnvme -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85524 00:13:20.713 killing process with pid 85524 00:13:20.713 11:55:19 blockdev_xnvme -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:20.713 11:55:19 blockdev_xnvme -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:20.713 11:55:19 blockdev_xnvme -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85524' 00:13:20.713 11:55:19 blockdev_xnvme -- common/autotest_common.sh@965 -- # kill 85524 00:13:20.713 11:55:19 blockdev_xnvme -- common/autotest_common.sh@970 -- # wait 85524 00:13:20.971 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:20.971 11:55:19 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:20.971 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:13:20.971 11:55:19 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:20.971 11:55:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.971 ************************************ 00:13:20.971 START TEST bdev_hello_world 00:13:20.971 ************************************ 00:13:20.971 11:55:19 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:20.971 [2024-07-21 11:55:19.773648] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:13:20.971 [2024-07-21 11:55:19.773750] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85786 ] 00:13:21.229 [2024-07-21 11:55:19.934147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.229 [2024-07-21 11:55:19.977963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.488 [2024-07-21 11:55:20.154093] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:21.488 [2024-07-21 11:55:20.154142] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:21.488 [2024-07-21 11:55:20.154169] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:21.488 [2024-07-21 11:55:20.156097] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:21.488 [2024-07-21 11:55:20.156516] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:21.488 [2024-07-21 11:55:20.156554] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:21.488 [2024-07-21 11:55:20.156833] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:13:21.488 00:13:21.488 [2024-07-21 11:55:20.156871] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:21.747 00:13:21.747 real 0m0.678s 00:13:21.747 user 0m0.376s 00:13:21.747 sys 0m0.194s 00:13:21.747 11:55:20 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:21.747 ************************************ 00:13:21.747 END TEST bdev_hello_world 00:13:21.747 ************************************ 00:13:21.747 11:55:20 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:21.747 11:55:20 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:13:21.747 11:55:20 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:21.747 11:55:20 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:21.747 11:55:20 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:21.747 ************************************ 00:13:21.747 START TEST bdev_bounds 00:13:21.747 ************************************ 00:13:21.747 11:55:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1121 -- # bdev_bounds '' 00:13:21.747 11:55:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=85817 00:13:21.747 11:55:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:21.747 11:55:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:21.747 Process bdevio pid: 85817 00:13:21.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
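From here the log moves into the bdev_bounds test, which runs SPDK's bdevio CUnit binary against the same six xnvme bdevs. Reduced to its shell skeleton, the pattern traced below looks roughly like this (a sketch: waitforlisten stands in for the helper of the same name in autotest_common.sh, and the traps and pid bookkeeping visible in the trace are elided):

    # Launch bdevio as a background SPDK app on the default RPC socket
    ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
    bdevio_pid=$!
    waitforlisten "$bdevio_pid"                  # poll until /var/tmp/spdk.sock accepts RPCs
    ./test/bdev/bdevio/tests.py perform_tests    # drive every CUnit suite over that socket
    kill "$bdevio_pid" && wait "$bdevio_pid"     # tear down and collect the exit status

Each bdev gets its own suite of read/write, write-zeroes, offset-bounds, and iov-variant cases; the run summary further down reports all 138 tests across the 6 suites passing.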
00:13:21.747 11:55:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 85817' 00:13:21.747 11:55:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 85817 00:13:21.747 11:55:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@827 -- # '[' -z 85817 ']' 00:13:21.747 11:55:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.748 11:55:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:21.748 11:55:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.748 11:55:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:21.748 11:55:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:21.748 [2024-07-21 11:55:20.523792] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:21.748 [2024-07-21 11:55:20.523943] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85817 ] 00:13:22.006 [2024-07-21 11:55:20.683104] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:22.006 [2024-07-21 11:55:20.728945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.007 [2024-07-21 11:55:20.728996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.007 [2024-07-21 11:55:20.729129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.574 11:55:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:22.574 11:55:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # return 0 00:13:22.574 11:55:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:22.574 I/O targets: 00:13:22.575 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:22.575 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:22.575 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:22.575 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:22.575 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:22.575 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:22.575 00:13:22.575 00:13:22.575 CUnit - A unit testing framework for C - Version 2.1-3 00:13:22.575 http://cunit.sourceforge.net/ 00:13:22.575 00:13:22.575 00:13:22.575 Suite: bdevio tests on: nvme3n1 00:13:22.575 Test: blockdev write read block ...passed 00:13:22.575 Test: blockdev write zeroes read block ...passed 00:13:22.575 Test: blockdev write zeroes read no split ...passed 00:13:22.575 Test: blockdev write zeroes read split ...passed 00:13:22.575 Test: blockdev write zeroes read split partial ...passed 00:13:22.575 Test: blockdev reset ...passed 00:13:22.575 Test: blockdev write read 8 blocks ...passed 00:13:22.575 Test: blockdev write read size > 128k ...passed 00:13:22.575 Test: blockdev write read invalid size ...passed 00:13:22.575 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:22.575 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:22.575 Test: blockdev write read max offset ...passed 00:13:22.575 Test: blockdev write read 2 blocks on overlapped address offset 
...passed 00:13:22.575 Test: blockdev writev readv 8 blocks ...passed 00:13:22.575 Test: blockdev writev readv 30 x 1block ...passed 00:13:22.575 Test: blockdev writev readv block ...passed 00:13:22.575 Test: blockdev writev readv size > 128k ...passed 00:13:22.575 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:22.575 Test: blockdev comparev and writev ...passed 00:13:22.575 Test: blockdev nvme passthru rw ...passed 00:13:22.575 Test: blockdev nvme passthru vendor specific ...passed 00:13:22.575 Test: blockdev nvme admin passthru ...passed 00:13:22.575 Test: blockdev copy ...passed 00:13:22.575 Suite: bdevio tests on: nvme2n1 00:13:22.575 Test: blockdev write read block ...passed 00:13:22.575 Test: blockdev write zeroes read block ...passed 00:13:22.575 Test: blockdev write zeroes read no split ...passed 00:13:22.575 Test: blockdev write zeroes read split ...passed 00:13:22.575 Test: blockdev write zeroes read split partial ...passed 00:13:22.575 Test: blockdev reset ...passed 00:13:22.575 Test: blockdev write read 8 blocks ...passed 00:13:22.575 Test: blockdev write read size > 128k ...passed 00:13:22.575 Test: blockdev write read invalid size ...passed 00:13:22.575 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:22.575 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:22.575 Test: blockdev write read max offset ...passed 00:13:22.575 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:22.575 Test: blockdev writev readv 8 blocks ...passed 00:13:22.575 Test: blockdev writev readv 30 x 1block ...passed 00:13:22.575 Test: blockdev writev readv block ...passed 00:13:22.575 Test: blockdev writev readv size > 128k ...passed 00:13:22.575 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:22.575 Test: blockdev comparev and writev ...passed 00:13:22.575 Test: blockdev nvme passthru rw ...passed 00:13:22.575 Test: blockdev nvme passthru vendor specific ...passed 00:13:22.575 Test: blockdev nvme admin passthru ...passed 00:13:22.575 Test: blockdev copy ...passed 00:13:22.575 Suite: bdevio tests on: nvme1n1 00:13:22.575 Test: blockdev write read block ...passed 00:13:22.575 Test: blockdev write zeroes read block ...passed 00:13:22.835 Test: blockdev write zeroes read no split ...passed 00:13:22.835 Test: blockdev write zeroes read split ...passed 00:13:22.835 Test: blockdev write zeroes read split partial ...passed 00:13:22.835 Test: blockdev reset ...passed 00:13:22.835 Test: blockdev write read 8 blocks ...passed 00:13:22.835 Test: blockdev write read size > 128k ...passed 00:13:22.835 Test: blockdev write read invalid size ...passed 00:13:22.835 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:22.835 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:22.835 Test: blockdev write read max offset ...passed 00:13:22.835 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:22.835 Test: blockdev writev readv 8 blocks ...passed 00:13:22.835 Test: blockdev writev readv 30 x 1block ...passed 00:13:22.835 Test: blockdev writev readv block ...passed 00:13:22.835 Test: blockdev writev readv size > 128k ...passed 00:13:22.835 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:22.835 Test: blockdev comparev and writev ...passed 00:13:22.835 Test: blockdev nvme passthru rw ...passed 00:13:22.835 Test: blockdev nvme passthru vendor specific ...passed 00:13:22.835 Test: blockdev nvme admin 
passthru ...passed 00:13:22.835 Test: blockdev copy ...passed 00:13:22.835 Suite: bdevio tests on: nvme0n3 00:13:22.835 Test: blockdev write read block ...passed 00:13:22.835 Test: blockdev write zeroes read block ...passed 00:13:22.835 Test: blockdev write zeroes read no split ...passed 00:13:22.835 Test: blockdev write zeroes read split ...passed 00:13:22.835 Test: blockdev write zeroes read split partial ...passed 00:13:22.835 Test: blockdev reset ...passed 00:13:22.835 Test: blockdev write read 8 blocks ...passed 00:13:22.835 Test: blockdev write read size > 128k ...passed 00:13:22.835 Test: blockdev write read invalid size ...passed 00:13:22.835 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:22.835 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:22.835 Test: blockdev write read max offset ...passed 00:13:22.835 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:22.835 Test: blockdev writev readv 8 blocks ...passed 00:13:22.835 Test: blockdev writev readv 30 x 1block ...passed 00:13:22.835 Test: blockdev writev readv block ...passed 00:13:22.835 Test: blockdev writev readv size > 128k ...passed 00:13:22.835 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:22.835 Test: blockdev comparev and writev ...passed 00:13:22.835 Test: blockdev nvme passthru rw ...passed 00:13:22.835 Test: blockdev nvme passthru vendor specific ...passed 00:13:22.835 Test: blockdev nvme admin passthru ...passed 00:13:22.835 Test: blockdev copy ...passed 00:13:22.835 Suite: bdevio tests on: nvme0n2 00:13:22.835 Test: blockdev write read block ...passed 00:13:22.835 Test: blockdev write zeroes read block ...passed 00:13:22.835 Test: blockdev write zeroes read no split ...passed 00:13:22.835 Test: blockdev write zeroes read split ...passed 00:13:22.835 Test: blockdev write zeroes read split partial ...passed 00:13:22.835 Test: blockdev reset ...passed 00:13:22.835 Test: blockdev write read 8 blocks ...passed 00:13:22.835 Test: blockdev write read size > 128k ...passed 00:13:22.835 Test: blockdev write read invalid size ...passed 00:13:22.835 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:22.835 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:22.835 Test: blockdev write read max offset ...passed 00:13:22.835 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:22.835 Test: blockdev writev readv 8 blocks ...passed 00:13:22.835 Test: blockdev writev readv 30 x 1block ...passed 00:13:22.835 Test: blockdev writev readv block ...passed 00:13:22.835 Test: blockdev writev readv size > 128k ...passed 00:13:22.835 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:22.835 Test: blockdev comparev and writev ...passed 00:13:22.835 Test: blockdev nvme passthru rw ...passed 00:13:22.835 Test: blockdev nvme passthru vendor specific ...passed 00:13:22.835 Test: blockdev nvme admin passthru ...passed 00:13:22.835 Test: blockdev copy ...passed 00:13:22.835 Suite: bdevio tests on: nvme0n1 00:13:22.835 Test: blockdev write read block ...passed 00:13:22.835 Test: blockdev write zeroes read block ...passed 00:13:22.835 Test: blockdev write zeroes read no split ...passed 00:13:22.835 Test: blockdev write zeroes read split ...passed 00:13:22.835 Test: blockdev write zeroes read split partial ...passed 00:13:22.835 Test: blockdev reset ...passed 00:13:22.835 Test: blockdev write read 8 blocks ...passed 00:13:22.835 Test: blockdev 
write read size > 128k ...passed 00:13:22.835 Test: blockdev write read invalid size ...passed 00:13:22.835 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:22.835 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:22.835 Test: blockdev write read max offset ...passed 00:13:22.835 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:22.835 Test: blockdev writev readv 8 blocks ...passed 00:13:22.835 Test: blockdev writev readv 30 x 1block ...passed 00:13:22.835 Test: blockdev writev readv block ...passed 00:13:22.835 Test: blockdev writev readv size > 128k ...passed 00:13:22.835 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:22.835 Test: blockdev comparev and writev ...passed 00:13:22.835 Test: blockdev nvme passthru rw ...passed 00:13:22.835 Test: blockdev nvme passthru vendor specific ...passed 00:13:22.835 Test: blockdev nvme admin passthru ...passed 00:13:22.835 Test: blockdev copy ...passed 00:13:22.835 00:13:22.835 Run Summary: Type Total Ran Passed Failed Inactive 00:13:22.835 suites 6 6 n/a 0 0 00:13:22.835 tests 138 138 138 0 0 00:13:22.835 asserts 780 780 780 0 n/a 00:13:22.835 00:13:22.835 Elapsed time = 0.402 seconds 00:13:22.835 0 00:13:22.835 11:55:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 85817 00:13:22.835 11:55:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@946 -- # '[' -z 85817 ']' 00:13:22.836 11:55:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # kill -0 85817 00:13:22.836 11:55:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@951 -- # uname 00:13:22.836 11:55:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:22.836 11:55:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85817 00:13:22.836 11:55:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:22.836 11:55:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:22.836 11:55:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85817' 00:13:22.836 killing process with pid 85817 00:13:22.836 11:55:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@965 -- # kill 85817 00:13:22.836 11:55:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # wait 85817 00:13:23.403 11:55:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:13:23.403 00:13:23.403 real 0m1.518s 00:13:23.403 user 0m3.550s 00:13:23.403 sys 0m0.333s 00:13:23.404 11:55:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:23.404 11:55:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:23.404 ************************************ 00:13:23.404 END TEST bdev_bounds 00:13:23.404 ************************************ 00:13:23.404 11:55:22 blockdev_xnvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:13:23.404 11:55:22 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:13:23.404 11:55:22 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:23.404 11:55:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:23.404 ************************************ 00:13:23.404 START TEST bdev_nbd 
00:13:23.404 ************************************ 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1121 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ Linux == Linux ]] 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_all 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@305 -- # local bdev_num=6 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@309 -- # [[ -e /sys/module/nbd ]] 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # local nbd_all 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@312 -- # bdev_num=6 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local nbd_list 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@315 -- # local bdev_list 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # nbd_pid=85863 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@320 -- # waitforlisten 85863 /var/tmp/spdk-nbd.sock 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@827 -- # '[' -z 85863 ']' 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:23.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:23.404 11:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:23.404 [2024-07-21 11:55:22.112518] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:13:23.404 [2024-07-21 11:55:22.112720] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.663 [2024-07-21 11:55:22.275141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.663 [2024-07-21 11:55:22.345987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:24.231 11:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:24.231 11:55:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # return 0 00:13:24.231 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:13:24.231 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:24.231 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:13:24.231 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:24.231 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:13:24.231 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:24.231 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:13:24.231 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:24.231 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:24.231 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:24.231 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:24.231 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:24.231 11:55:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:24.231 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:24.232 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:24.232 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:24.232 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:13:24.232 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:24.232 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:24.232 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:24.232 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:13:24.232 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:24.232 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:24.232 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:24.232 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.489 1+0 records in 
00:13:24.489 1+0 records out 00:13:24.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000740653 s, 5.5 MB/s 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.489 1+0 records in 00:13:24.489 1+0 records out 00:13:24.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604514 s, 6.8 MB/s 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:24.489 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:13:24.747 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:24.747 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:24.747 11:55:23 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:24.747 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd2 00:13:24.747 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:24.747 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:24.747 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:24.747 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd2 /proc/partitions 00:13:24.747 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:24.747 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:24.747 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:24.747 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.747 1+0 records in 00:13:24.747 1+0 records out 00:13:24.747 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00062254 s, 6.6 MB/s 00:13:24.747 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.747 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:24.747 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.747 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:24.747 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:24.747 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:24.747 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:24.747 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:25.018 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:25.018 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:25.018 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:25.018 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd3 00:13:25.018 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:25.018 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:25.018 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:25.018 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd3 /proc/partitions 00:13:25.018 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:25.018 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:25.018 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:25.018 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:25.018 1+0 records in 00:13:25.018 1+0 records out 00:13:25.018 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000669663 s, 6.1 MB/s 00:13:25.018 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.018 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:25.018 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.018 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:25.018 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:25.018 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:25.018 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:25.018 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:25.276 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:25.276 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:25.276 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:25.276 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd4 00:13:25.276 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:25.276 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:25.276 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:25.276 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd4 /proc/partitions 00:13:25.276 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:25.276 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:25.276 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:25.276 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:25.276 1+0 records in 00:13:25.276 1+0 records out 00:13:25.276 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00091004 s, 4.5 MB/s 00:13:25.276 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.277 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:25.277 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.277 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:25.277 11:55:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:25.277 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:25.277 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:25.277 11:55:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:25.535 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:25.535 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:25.535 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:25.535 11:55:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd5 00:13:25.535 11:55:24 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@865 -- # local i 00:13:25.535 11:55:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:25.535 11:55:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:25.535 11:55:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd5 /proc/partitions 00:13:25.535 11:55:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:25.535 11:55:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:25.535 11:55:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:25.535 11:55:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:25.535 1+0 records in 00:13:25.535 1+0 records out 00:13:25.535 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000833326 s, 4.9 MB/s 00:13:25.535 11:55:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.535 11:55:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:25.535 11:55:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:25.535 11:55:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:25.535 11:55:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:25.535 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:25.535 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:25.535 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:25.535 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:25.535 { 00:13:25.535 "nbd_device": "/dev/nbd0", 00:13:25.535 "bdev_name": "nvme0n1" 00:13:25.535 }, 00:13:25.535 { 00:13:25.535 "nbd_device": "/dev/nbd1", 00:13:25.535 "bdev_name": "nvme0n2" 00:13:25.535 }, 00:13:25.535 { 00:13:25.535 "nbd_device": "/dev/nbd2", 00:13:25.535 "bdev_name": "nvme0n3" 00:13:25.535 }, 00:13:25.535 { 00:13:25.535 "nbd_device": "/dev/nbd3", 00:13:25.535 "bdev_name": "nvme1n1" 00:13:25.535 }, 00:13:25.535 { 00:13:25.535 "nbd_device": "/dev/nbd4", 00:13:25.535 "bdev_name": "nvme2n1" 00:13:25.535 }, 00:13:25.535 { 00:13:25.535 "nbd_device": "/dev/nbd5", 00:13:25.535 "bdev_name": "nvme3n1" 00:13:25.535 } 00:13:25.535 ]' 00:13:25.535 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:25.535 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:25.535 { 00:13:25.535 "nbd_device": "/dev/nbd0", 00:13:25.535 "bdev_name": "nvme0n1" 00:13:25.535 }, 00:13:25.535 { 00:13:25.535 "nbd_device": "/dev/nbd1", 00:13:25.535 "bdev_name": "nvme0n2" 00:13:25.535 }, 00:13:25.535 { 00:13:25.535 "nbd_device": "/dev/nbd2", 00:13:25.535 "bdev_name": "nvme0n3" 00:13:25.535 }, 00:13:25.535 { 00:13:25.535 "nbd_device": "/dev/nbd3", 00:13:25.535 "bdev_name": "nvme1n1" 00:13:25.535 }, 00:13:25.535 { 00:13:25.535 "nbd_device": "/dev/nbd4", 00:13:25.535 "bdev_name": "nvme2n1" 00:13:25.535 }, 00:13:25.535 { 00:13:25.535 "nbd_device": "/dev/nbd5", 00:13:25.535 "bdev_name": "nvme3n1" 00:13:25.535 } 00:13:25.535 ]' 00:13:25.535 11:55:24 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:25.793 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:25.793 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:25.793 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:25.793 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:25.793 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:25.793 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.793 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:25.793 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:25.793 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:25.793 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:25.793 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.793 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.793 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:25.793 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:25.793 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.793 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.793 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:26.050 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:26.050 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:26.050 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:26.050 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.050 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.050 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:26.050 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.050 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.050 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.050 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:26.309 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:26.309 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:26.309 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:26.309 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.309 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.309 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 
/proc/partitions 00:13:26.309 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.309 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.309 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.309 11:55:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:26.309 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:26.309 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:26.309 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:26.309 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.309 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.309 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:26.309 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.309 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.309 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.309 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:26.567 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:26.567 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:26.567 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:26.567 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.567 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.567 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:26.567 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.567 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.567 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:26.567 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:26.826 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:26.826 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:26.826 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:26.826 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:26.826 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:26.826 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:26.826 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:26.826 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:26.826 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:26.826 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:26.826 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:27.085 /dev/nbd0 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.085 1+0 records in 00:13:27.085 1+0 records out 00:13:27.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000583821 s, 7.0 MB/s 00:13:27.085 11:55:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.345 11:55:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:27.345 11:55:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.345 11:55:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:27.345 11:55:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:27.345 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.345 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:27.345 11:55:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:13:27.345 /dev/nbd1 00:13:27.345 11:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:27.345 11:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:27.345 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:13:27.345 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:27.345 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:27.345 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:27.345 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:13:27.345 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:27.345 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:27.345 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:27.345 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.345 1+0 records in 00:13:27.345 1+0 records out 00:13:27.345 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000542967 s, 7.5 MB/s 00:13:27.345 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.345 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:27.345 11:55:26 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.345 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:27.345 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:27.345 11:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.345 11:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:27.345 11:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:13:27.605 /dev/nbd10 00:13:27.605 11:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:27.605 11:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:27.605 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd10 00:13:27.605 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:27.605 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:27.605 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:27.605 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd10 /proc/partitions 00:13:27.605 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:27.605 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:27.605 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:27.605 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.605 1+0 records in 00:13:27.605 1+0 records out 00:13:27.605 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000659877 s, 6.2 MB/s 00:13:27.605 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.605 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:27.605 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.605 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:27.605 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:27.605 11:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.605 11:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:27.605 11:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:13:27.864 /dev/nbd11 00:13:27.864 11:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:27.864 11:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:27.864 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd11 00:13:27.864 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:27.864 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:27.864 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:27.864 11:55:26 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd11 /proc/partitions 00:13:27.864 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:27.864 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:27.864 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:27.864 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.864 1+0 records in 00:13:27.864 1+0 records out 00:13:27.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000762426 s, 5.4 MB/s 00:13:27.864 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.864 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:27.864 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.864 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:27.864 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:27.864 11:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.864 11:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:27.864 11:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:13:28.123 /dev/nbd12 00:13:28.123 11:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:28.123 11:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:28.123 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd12 00:13:28.123 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:28.123 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:28.123 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:28.123 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd12 /proc/partitions 00:13:28.123 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:28.123 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:28.123 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:28.123 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:28.123 1+0 records in 00:13:28.123 1+0 records out 00:13:28.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00068449 s, 6.0 MB/s 00:13:28.123 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.123 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:28.123 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.123 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:28.123 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:28.123 11:55:26 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:28.123 11:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:28.123 11:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:28.123 /dev/nbd13 00:13:28.382 11:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:28.382 11:55:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:28.382 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # local nbd_name=nbd13 00:13:28.382 11:55:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@865 -- # local i 00:13:28.382 11:55:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:13:28.382 11:55:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:13:28.382 11:55:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # grep -q -w nbd13 /proc/partitions 00:13:28.382 11:55:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # break 00:13:28.383 11:55:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:13:28.383 11:55:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:13:28.383 11:55:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@881 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:28.383 1+0 records in 00:13:28.383 1+0 records out 00:13:28.383 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000877862 s, 4.7 MB/s 00:13:28.383 11:55:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.383 11:55:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@882 -- # size=4096 00:13:28.383 11:55:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:28.383 11:55:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:13:28.383 11:55:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # return 0 00:13:28.383 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:28.383 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:28.383 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:28.383 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:28.383 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:28.383 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:28.383 { 00:13:28.383 "nbd_device": "/dev/nbd0", 00:13:28.383 "bdev_name": "nvme0n1" 00:13:28.383 }, 00:13:28.383 { 00:13:28.383 "nbd_device": "/dev/nbd1", 00:13:28.383 "bdev_name": "nvme0n2" 00:13:28.383 }, 00:13:28.383 { 00:13:28.383 "nbd_device": "/dev/nbd10", 00:13:28.383 "bdev_name": "nvme0n3" 00:13:28.383 }, 00:13:28.383 { 00:13:28.383 "nbd_device": "/dev/nbd11", 00:13:28.383 "bdev_name": "nvme1n1" 00:13:28.383 }, 00:13:28.383 { 00:13:28.383 "nbd_device": "/dev/nbd12", 00:13:28.383 "bdev_name": "nvme2n1" 00:13:28.383 }, 00:13:28.383 { 00:13:28.383 "nbd_device": "/dev/nbd13", 00:13:28.383 "bdev_name": "nvme3n1" 00:13:28.383 } 00:13:28.383 ]' 00:13:28.383 11:55:27 
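# --- annotation (not part of the captured trace) ----------------------------
# The nbd_get_count helper entered above fetches the export list (the JSON
# just printed) and, in the lines that follow, counts the /dev/nbd entries
# with jq and grep. A condensed sketch of the same check, reusing the socket
# and script paths from this run:
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
mapped=$("$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device')
count=$(printf '%s\n' "$mapped" | grep -c /dev/nbd)
[ "$count" -eq 6 ] || echo "expected 6 NBD exports, found $count" >&2
# -----------------------------------------------------------------------------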
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:28.383 { 00:13:28.383 "nbd_device": "/dev/nbd0", 00:13:28.383 "bdev_name": "nvme0n1" 00:13:28.383 }, 00:13:28.383 { 00:13:28.383 "nbd_device": "/dev/nbd1", 00:13:28.383 "bdev_name": "nvme0n2" 00:13:28.383 }, 00:13:28.383 { 00:13:28.383 "nbd_device": "/dev/nbd10", 00:13:28.383 "bdev_name": "nvme0n3" 00:13:28.383 }, 00:13:28.383 { 00:13:28.383 "nbd_device": "/dev/nbd11", 00:13:28.383 "bdev_name": "nvme1n1" 00:13:28.383 }, 00:13:28.383 { 00:13:28.383 "nbd_device": "/dev/nbd12", 00:13:28.383 "bdev_name": "nvme2n1" 00:13:28.383 }, 00:13:28.383 { 00:13:28.383 "nbd_device": "/dev/nbd13", 00:13:28.383 "bdev_name": "nvme3n1" 00:13:28.383 } 00:13:28.383 ]' 00:13:28.383 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:28.643 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:28.643 /dev/nbd1 00:13:28.643 /dev/nbd10 00:13:28.643 /dev/nbd11 00:13:28.643 /dev/nbd12 00:13:28.643 /dev/nbd13' 00:13:28.643 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:28.643 /dev/nbd1 00:13:28.643 /dev/nbd10 00:13:28.643 /dev/nbd11 00:13:28.643 /dev/nbd12 00:13:28.643 /dev/nbd13' 00:13:28.643 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:28.643 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:28.643 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:28.643 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:28.643 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:28.643 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:28.643 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:28.643 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:28.643 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:28.643 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:28.643 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:28.643 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:28.643 256+0 records in 00:13:28.643 256+0 records out 00:13:28.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0120664 s, 86.9 MB/s 00:13:28.643 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:28.643 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:28.643 256+0 records in 00:13:28.643 256+0 records out 00:13:28.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0937858 s, 11.2 MB/s 00:13:28.643 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:28.643 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:28.643 256+0 records in 00:13:28.643 256+0 records out 00:13:28.643 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0915113 s, 11.5 MB/s 00:13:28.643 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:28.643 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:28.903 256+0 records in 00:13:28.903 256+0 records out 00:13:28.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0898457 s, 11.7 MB/s 00:13:28.903 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:28.903 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:28.903 256+0 records in 00:13:28.903 256+0 records out 00:13:28.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0914449 s, 11.5 MB/s 00:13:28.903 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:28.903 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:29.162 256+0 records in 00:13:29.162 256+0 records out 00:13:29.162 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.115031 s, 9.1 MB/s 00:13:29.162 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:29.162 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:29.162 256+0 records in 00:13:29.162 256+0 records out 00:13:29.162 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0913616 s, 11.5 MB/s 00:13:29.162 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.163 11:55:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:29.429 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:29.429 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:29.429 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:29.429 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.429 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.429 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:29.429 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:29.429 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.429 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.429 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:29.707 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:29.707 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:29.707 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:29.707 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.707 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.707 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:29.707 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:29.707 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.707 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.707 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:29.707 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:29.707 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:29.707 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:29.707 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.707 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.707 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:29.707 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:29.707 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.707 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.707 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:29.978 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:29.979 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:29.979 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:29.979 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.979 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.979 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:29.979 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:29.979 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.979 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.979 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:30.240 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:30.240 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:30.240 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:30.240 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.240 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.240 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:30.240 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:30.240 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.240 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.240 11:55:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:30.240 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:30.240 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:30.240 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:30.240 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.240 11:55:29 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.240 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:30.240 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:30.240 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.240 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:30.240 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:30.240 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:30.500 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:30.500 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:30.500 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:30.500 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:30.500 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:30.500 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:30.500 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:30.500 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:30.500 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:30.500 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:30.500 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:30.500 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:30.500 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@324 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:30.500 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:30.500 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:30.500 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd_list 00:13:30.500 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:13:30.500 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:30.760 malloc_lvol_verify 00:13:30.760 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:31.018 a2ff2ee1-70f9-4826-9df2-9026a0c04d35 00:13:31.018 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:31.018 4d1ccaf5-da63-44d0-aa9e-e5e275eaf615 00:13:31.018 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:31.277 /dev/nbd0 00:13:31.277 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:13:31.277 Discarding device blocks: 0/4096 done 00:13:31.277 mke2fs 1.46.5 (30-Dec-2021) 
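# --- annotation (not part of the captured trace; mkfs.ext4 output continues
# below) ----------------------------------------------------------------------
# nbd_with_lvol_verify wires a logical volume through the same NBD path and
# uses mkfs.ext4 as the end-to-end smoke test. The RPC sequence traced above,
# collapsed into one runnable sketch (sizes and names mirror this run):
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
"$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512  # 16 MiB backing bdev, 512 B blocks
"$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
"$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs                   # 4 MiB lvol in store "lvs"
"$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
mkfs.ext4 /dev/nbd0                                                # a clean mkfs proves the whole stack
"$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
# -----------------------------------------------------------------------------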
00:13:31.277 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:31.277 00:13:31.277 Allocating group tables: 0/1 done 00:13:31.277 Writing inode tables: 0/1 done 00:13:31.277 Creating journal (1024 blocks): done 00:13:31.277 Writing superblocks and filesystem accounting information: 0/1 done 00:13:31.277 00:13:31.277 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:13:31.277 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:31.277 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:31.277 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:31.277 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:31.277 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:31.277 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:31.277 11:55:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:31.536 11:55:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:31.536 11:55:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:31.536 11:55:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:31.536 11:55:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:31.536 11:55:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:31.536 11:55:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:31.536 11:55:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:31.536 11:55:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:31.536 11:55:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:13:31.536 11:55:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@147 -- # return 0 00:13:31.536 11:55:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # killprocess 85863 00:13:31.536 11:55:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@946 -- # '[' -z 85863 ']' 00:13:31.536 11:55:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # kill -0 85863 00:13:31.536 11:55:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@951 -- # uname 00:13:31.536 11:55:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:31.536 11:55:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 85863 00:13:31.536 11:55:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:31.536 11:55:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:31.536 11:55:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # echo 'killing process with pid 85863' 00:13:31.536 killing process with pid 85863 00:13:31.536 11:55:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@965 -- # kill 85863 00:13:31.536 11:55:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # wait 85863 00:13:31.795 11:55:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@327 -- # trap - SIGINT SIGTERM EXIT 00:13:31.795 00:13:31.795 real 0m8.453s 00:13:31.795 user 0m11.509s 00:13:31.795 sys 0m3.515s 00:13:31.795 11:55:30 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:13:31.795 ************************************ 00:13:31.795 END TEST bdev_nbd 00:13:31.795 ************************************ 00:13:31.795 11:55:30 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:31.795 11:55:30 blockdev_xnvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:13:31.796 11:55:30 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = nvme ']' 00:13:31.796 11:55:30 blockdev_xnvme -- bdev/blockdev.sh@764 -- # '[' xnvme = gpt ']' 00:13:31.796 11:55:30 blockdev_xnvme -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:13:31.796 11:55:30 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:31.796 11:55:30 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:31.796 11:55:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.796 ************************************ 00:13:31.796 START TEST bdev_fio 00:13:31.796 ************************************ 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1121 -- # fio_test_suite '' 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:31.796 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=verify 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type=AIO 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1279 -- # local env_context= 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z verify ']' 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1309 -- # '[' verify == verify ']' 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1310 -- # cat 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1319 -- # '[' AIO == AIO ']' 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1320 -- # /usr/src/fio/fio --version 00:13:31.796 
11:55:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1320 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1321 -- # echo serialize_overlap=1 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n1]' 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n1 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n2]' 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n2 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme0n3]' 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme0n3 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme1n1]' 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme1n1 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme2n1]' 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme2n1 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_nvme3n1]' 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=nvme3n1 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:31.796 ************************************ 00:13:31.796 START TEST bdev_fio_rw_verify 00:13:31.796 ************************************ 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1121 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1335 -- # local sanitizers 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # shift 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local asan_lib= 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # grep libasan 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # break 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:31.796 11:55:30 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:32.055 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:32.055 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:32.055 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:32.055 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:32.055 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:32.055 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:32.055 fio-3.35 00:13:32.055 Starting 6 threads 00:13:44.262 00:13:44.262 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=86248: Sun Jul 21 11:55:41 2024 00:13:44.262 read: IOPS=35.4k, 
BW=138MiB/s (145MB/s)(1383MiB/10001msec) 00:13:44.262 slat (usec): min=2, max=3958, avg= 8.68, stdev= 9.25 00:13:44.262 clat (usec): min=67, max=4711, avg=426.60, stdev=214.91 00:13:44.262 lat (usec): min=70, max=4724, avg=435.28, stdev=216.74 00:13:44.262 clat percentiles (usec): 00:13:44.262 | 50.000th=[ 388], 99.000th=[ 1029], 99.900th=[ 1467], 99.990th=[ 3785], 00:13:44.262 | 99.999th=[ 4490] 00:13:44.262 write: IOPS=35.7k, BW=139MiB/s (146MB/s)(1394MiB/10001msec); 0 zone resets 00:13:44.262 slat (usec): min=10, max=2058, avg=35.37, stdev=43.47 00:13:44.262 clat (usec): min=61, max=4812, avg=589.17, stdev=278.55 00:13:44.262 lat (usec): min=81, max=4863, avg=624.54, stdev=288.35 00:13:44.262 clat percentiles (usec): 00:13:44.262 | 50.000th=[ 553], 99.000th=[ 1401], 99.900th=[ 1860], 99.990th=[ 2704], 00:13:44.262 | 99.999th=[ 3949] 00:13:44.262 bw ( KiB/s): min=113240, max=174768, per=100.00%, avg=143178.74, stdev=2729.34, samples=114 00:13:44.262 iops : min=28310, max=43692, avg=35794.47, stdev=682.33, samples=114 00:13:44.262 lat (usec) : 100=0.01%, 250=14.84%, 500=40.75%, 750=27.84%, 1000=11.90% 00:13:44.262 lat (msec) : 2=4.60%, 4=0.06%, 10=0.01% 00:13:44.262 cpu : usr=54.17%, sys=26.64%, ctx=9353, majf=0, minf=29118 00:13:44.262 IO depths : 1=11.8%, 2=24.1%, 4=50.9%, 8=13.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:44.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.262 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.262 issued rwts: total=354056,356932,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:44.262 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:44.262 00:13:44.262 Run status group 0 (all jobs): 00:13:44.262 READ: bw=138MiB/s (145MB/s), 138MiB/s-138MiB/s (145MB/s-145MB/s), io=1383MiB (1450MB), run=10001-10001msec 00:13:44.262 WRITE: bw=139MiB/s (146MB/s), 139MiB/s-139MiB/s (146MB/s-146MB/s), io=1394MiB (1462MB), run=10001-10001msec 00:13:44.262 ----------------------------------------------------- 00:13:44.262 Suppressions used: 00:13:44.262 count bytes template 00:13:44.262 6 48 /usr/src/fio/parse.c 00:13:44.262 2631 252576 /usr/src/fio/iolog.c 00:13:44.262 1 8 libtcmalloc_minimal.so 00:13:44.262 1 904 libcrypto.so 00:13:44.262 ----------------------------------------------------- 00:13:44.262 00:13:44.263 00:13:44.263 real 0m11.172s 00:13:44.263 user 0m33.256s 00:13:44.263 sys 0m16.336s 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:44.263 ************************************ 00:13:44.263 END TEST bdev_fio_rw_verify 00:13:44.263 ************************************ 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1276 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1277 -- # local workload=trim 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1278 -- # local bdev_type= 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio 
-- common/autotest_common.sh@1279 -- # local env_context= 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local fio_dir=/usr/src/fio 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # '[' -z trim ']' 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -n '' ']' 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # cat 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1309 -- # '[' trim == verify ']' 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # '[' trim == trim ']' 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo rw=trimwrite 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c42c1078-f846-443f-9610-223a00c65f9c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c42c1078-f846-443f-9610-223a00c65f9c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "9ef3f6a6-72db-42bc-a5b7-dd42ad993e79"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9ef3f6a6-72db-42bc-a5b7-dd42ad993e79",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "f4064b21-000e-47df-beb0-cbbba9ee218e"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f4064b21-000e-47df-beb0-cbbba9ee218e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "1ab3d6c1-316d-4b04-9d03-1cfaed2dc368"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": 
"1ab3d6c1-316d-4b04-9d03-1cfaed2dc368",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "936364c6-1aba-46ee-88f1-d725f5629373"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "936364c6-1aba-46ee-88f1-d725f5629373",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "38db825b-2d71-4856-9d9a-38b7e341efcb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "38db825b-2d71-4856-9d9a-38b7e341efcb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n '' ]] 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # popd 00:13:44.263 /home/vagrant/spdk_repo/spdk 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # trap - SIGINT SIGTERM EXIT 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@364 -- # return 0 00:13:44.263 00:13:44.263 real 0m11.378s 00:13:44.263 user 0m33.358s 00:13:44.263 sys 0m16.447s 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:44.263 11:55:41 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:44.263 ************************************ 00:13:44.263 END TEST bdev_fio 00:13:44.263 ************************************ 00:13:44.263 11:55:41 blockdev_xnvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:44.263 11:55:41 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:44.263 11:55:41 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:13:44.263 11:55:41 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:44.263 11:55:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:44.263 ************************************ 00:13:44.263 START TEST bdev_verify 00:13:44.263 
************************************ 00:13:44.263 11:55:41 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:44.263 [2024-07-21 11:55:42.058882] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:44.263 [2024-07-21 11:55:42.059022] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86414 ] 00:13:44.263 [2024-07-21 11:55:42.218177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:44.263 [2024-07-21 11:55:42.261610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.263 [2024-07-21 11:55:42.261721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.263 Running I/O for 5 seconds... 00:13:49.529 00:13:49.529 Latency(us) 00:13:49.529 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:49.529 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:49.529 Verification LBA range: start 0x0 length 0x80000 00:13:49.529 nvme0n1 : 5.05 1623.18 6.34 0.00 0.00 78731.53 6496.36 75552.42 00:13:49.529 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:49.529 Verification LBA range: start 0x80000 length 0x80000 00:13:49.529 nvme0n1 : 5.04 2359.97 9.22 0.00 0.00 54153.20 13336.15 55176.16 00:13:49.529 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:49.529 Verification LBA range: start 0x0 length 0x80000 00:13:49.529 nvme0n2 : 5.05 1621.30 6.33 0.00 0.00 78709.32 16026.27 72347.17 00:13:49.529 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:49.529 Verification LBA range: start 0x80000 length 0x80000 00:13:49.529 nvme0n2 : 5.04 2363.88 9.23 0.00 0.00 53977.62 12363.12 59984.04 00:13:49.529 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:49.529 Verification LBA range: start 0x0 length 0x80000 00:13:49.529 nvme0n3 : 5.05 1620.87 6.33 0.00 0.00 78602.89 16598.64 78299.78 00:13:49.529 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:49.529 Verification LBA range: start 0x80000 length 0x80000 00:13:49.529 nvme0n3 : 5.06 2376.47 9.28 0.00 0.00 53609.97 9100.63 64105.08 00:13:49.529 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:49.529 Verification LBA range: start 0x0 length 0x20000 00:13:49.529 nvme1n1 : 5.06 1619.75 6.33 0.00 0.00 78534.44 7669.72 81047.14 00:13:49.529 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:49.529 Verification LBA range: start 0x20000 length 0x20000 00:13:49.529 nvme1n1 : 5.06 2353.94 9.20 0.00 0.00 54037.79 6038.47 65936.66 00:13:49.529 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:49.529 Verification LBA range: start 0x0 length 0xbd0bd 00:13:49.529 nvme2n1 : 5.07 2399.39 9.37 0.00 0.00 52873.48 3620.22 70515.59 00:13:49.529 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:49.529 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:49.529 nvme2n1 : 5.06 3131.43 12.23 0.00 0.00 40482.05 4063.80 57007.73 00:13:49.529 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:49.529 
Verification LBA range: start 0x0 length 0xa0000 00:13:49.529 nvme3n1 : 5.07 1617.35 6.32 0.00 0.00 78375.95 8184.85 70973.48 00:13:49.529 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:49.529 Verification LBA range: start 0xa0000 length 0xa0000 00:13:49.529 nvme3n1 : 5.06 2377.07 9.29 0.00 0.00 53330.03 7726.95 59984.04 00:13:49.529 =================================================================================================================== 00:13:49.529 Total : 25464.61 99.47 0.00 0.00 59970.91 3620.22 81047.14 00:13:49.529 00:13:49.529 real 0m5.795s 00:13:49.529 user 0m9.075s 00:13:49.529 sys 0m1.578s 00:13:49.529 11:55:47 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:49.529 11:55:47 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:49.529 ************************************ 00:13:49.529 END TEST bdev_verify 00:13:49.529 ************************************ 00:13:49.529 11:55:47 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:49.529 11:55:47 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 16 -le 1 ']' 00:13:49.529 11:55:47 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:49.529 11:55:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:49.529 ************************************ 00:13:49.529 START TEST bdev_verify_big_io 00:13:49.529 ************************************ 00:13:49.529 11:55:47 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:49.529 [2024-07-21 11:55:47.910639] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:13:49.529 [2024-07-21 11:55:47.911220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86499 ] 00:13:49.529 [2024-07-21 11:55:48.073746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:49.529 [2024-07-21 11:55:48.119473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:49.529 [2024-07-21 11:55:48.119562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:49.529 Running I/O for 5 seconds... 
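# --- annotation (not part of the captured trace) ----------------------------
# bdev_verify above and bdev_verify_big_io (now running) drive the same
# bdevperf example binary; only the I/O size differs (-o 4096 vs -o 65536).
# The equivalent standalone invocations, assuming the bdev.json generated
# earlier in this run:
bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
"$bdevperf" --json "$conf" -q 128 -o 4096  -w verify -t 5 -C -m 0x3   # bdev_verify
"$bdevperf" --json "$conf" -q 128 -o 65536 -w verify -t 5 -C -m 0x3   # bdev_verify_big_io
# -----------------------------------------------------------------------------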
00:13:56.098 00:13:56.098 Latency(us) 00:13:56.098 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.098 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:56.098 Verification LBA range: start 0x0 length 0x8000 00:13:56.098 nvme0n1 : 5.63 113.70 7.11 0.00 0.00 1084350.69 6868.40 1743658.26 00:13:56.098 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:56.098 Verification LBA range: start 0x8000 length 0x8000 00:13:56.098 nvme0n1 : 5.48 222.04 13.88 0.00 0.00 567358.98 77383.99 505514.37 00:13:56.098 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:56.098 Verification LBA range: start 0x0 length 0x8000 00:13:56.098 nvme0n2 : 5.74 133.78 8.36 0.00 0.00 871284.78 64562.98 1047660.21 00:13:56.098 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:56.098 Verification LBA range: start 0x8000 length 0x8000 00:13:56.098 nvme0n2 : 5.48 221.93 13.87 0.00 0.00 559301.83 6095.71 791239.88 00:13:56.098 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:56.098 Verification LBA range: start 0x0 length 0x8000 00:13:56.098 nvme0n3 : 5.79 95.35 5.96 0.00 0.00 1182138.63 55405.11 3194264.71 00:13:56.098 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:56.098 Verification LBA range: start 0x8000 length 0x8000 00:13:56.098 nvme0n3 : 5.48 207.24 12.95 0.00 0.00 589534.59 73262.95 644713.98 00:13:56.098 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:56.098 Verification LBA range: start 0x0 length 0x2000 00:13:56.098 nvme1n1 : 5.91 129.91 8.12 0.00 0.00 833252.80 44415.66 1868205.28 00:13:56.098 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:56.098 Verification LBA range: start 0x2000 length 0x2000 00:13:56.098 nvme1n1 : 5.49 244.92 15.31 0.00 0.00 490153.73 18888.10 750945.26 00:13:56.098 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:56.098 Verification LBA range: start 0x0 length 0xbd0b 00:13:56.098 nvme2n1 : 6.08 199.88 12.49 0.00 0.00 527610.83 25069.67 754608.41 00:13:56.098 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:56.098 Verification LBA range: start 0xbd0b length 0xbd0b 00:13:56.098 nvme2n1 : 5.49 300.44 18.78 0.00 0.00 393878.98 8013.14 692334.90 00:13:56.098 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:13:56.098 Verification LBA range: start 0x0 length 0xa000 00:13:56.098 nvme3n1 : 6.14 185.15 11.57 0.00 0.00 551783.88 1609.78 2300456.69 00:13:56.098 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:13:56.098 Verification LBA range: start 0xa000 length 0xa000 00:13:56.098 nvme3n1 : 5.49 221.39 13.84 0.00 0.00 528226.81 5780.90 641050.83 00:13:56.099 =================================================================================================================== 00:13:56.099 Total : 2275.72 142.23 0.00 0.00 615583.77 1609.78 3194264.71 00:13:56.099 00:13:56.099 real 0m6.910s 00:13:56.099 user 0m12.556s 00:13:56.099 sys 0m0.559s 00:13:56.099 11:55:54 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:56.099 11:55:54 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.099 ************************************ 00:13:56.099 END TEST bdev_verify_big_io 00:13:56.099 ************************************ 00:13:56.099 
00:13:56.099 11:55:54 blockdev_xnvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:56.099 11:55:54 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:13:56.099 11:55:54 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable
00:13:56.099 11:55:54 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:56.099 ************************************
00:13:56.099 START TEST bdev_write_zeroes
00:13:56.099 ************************************
00:13:56.099 11:55:54 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:56.099 [2024-07-21 11:55:54.872661] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:13:56.099 [2024-07-21 11:55:54.872831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86602 ]
00:13:56.357 [2024-07-21 11:55:55.045427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:56.357 [2024-07-21 11:55:55.089014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:56.615 Running I/O for 1 seconds...
00:13:57.550
00:13:57.550 Latency(us)
00:13:57.550 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:57.550 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:57.550 nvme0n1 : 1.02 10689.74 41.76 0.00 0.00 11962.16 7841.43 19574.94
00:13:57.550 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:57.550 nvme0n2 : 1.02 10669.45 41.68 0.00 0.00 11976.72 7784.19 19918.37
00:13:57.550 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:57.550 nvme0n3 : 1.02 10649.15 41.60 0.00 0.00 11993.91 7784.19 20261.79
00:13:57.550 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:57.550 nvme1n1 : 1.02 10632.09 41.53 0.00 0.00 12007.28 7784.19 20490.73
00:13:57.550 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:57.550 nvme2n1 : 1.01 12530.69 48.95 0.00 0.00 10177.84 4636.17 16255.22
00:13:57.550 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:57.550 nvme3n1 : 1.02 10710.33 41.84 0.00 0.00 11830.62 3348.35 19117.05
00:13:57.550 ===================================================================================================================
00:13:57.550 Total : 65881.45 257.35 0.00 0.00 11617.85 3348.35 20490.73
00:13:57.809
00:13:57.809 real 0m1.743s
00:13:57.809 user 0m1.026s
00:13:57.809 sys 0m0.556s
00:13:57.809 11:55:56 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1122 -- # xtrace_disable
00:13:57.809 11:55:56 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:13:57.809 ************************************
00:13:57.809 END TEST bdev_write_zeroes
00:13:57.809 ************************************
00:13:57.809 11:55:56 blockdev_xnvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:57.809 11:55:56 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:13:57.809 11:55:56 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable
00:13:57.809 11:55:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:57.809 ************************************
00:13:57.809 START TEST bdev_json_nonenclosed
00:13:57.809 ************************************
00:13:57.809 11:55:56 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:58.068 [2024-07-21 11:55:56.673174] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:13:58.068 [2024-07-21 11:55:56.673278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86641 ]
00:13:58.068 [2024-07-21 11:55:56.823874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:58.068 [2024-07-21 11:55:56.868906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:58.068 [2024-07-21 11:55:56.868982] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:13:58.068 [2024-07-21 11:55:56.869008] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:13:58.068 [2024-07-21 11:55:56.869018] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:58.334
00:13:58.334 real 0m0.374s
00:13:58.334 user 0m0.170s
00:13:58.334 sys 0m0.101s
00:13:58.334 11:55:56 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1122 -- # xtrace_disable
00:13:58.334 11:55:56 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:13:58.334 ************************************
00:13:58.334 END TEST bdev_json_nonenclosed
00:13:58.334 ************************************
00:13:58.334 11:55:57 blockdev_xnvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:58.334 11:55:57 blockdev_xnvme -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:13:58.334 11:55:57 blockdev_xnvme -- common/autotest_common.sh@1103 -- # xtrace_disable
00:13:58.334 11:55:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:58.334 ************************************
00:13:58.334 START TEST bdev_json_nonarray
00:13:58.334 ************************************
00:13:58.334 11:55:57 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:58.334 [2024-07-21 11:55:57.106708] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization...
00:13:58.334 [2024-07-21 11:55:57.106831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86666 ] 00:13:58.608 [2024-07-21 11:55:57.265189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.608 [2024-07-21 11:55:57.308378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.608 [2024-07-21 11:55:57.308474] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:58.608 [2024-07-21 11:55:57.308501] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:13:58.608 [2024-07-21 11:55:57.308511] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:58.608 00:13:58.608 real 0m0.391s 00:13:58.608 user 0m0.170s 00:13:58.608 sys 0m0.118s 00:13:58.608 11:55:57 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:58.608 11:55:57 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:13:58.608 ************************************ 00:13:58.608 END TEST bdev_json_nonarray 00:13:58.608 ************************************ 00:13:58.608 11:55:57 blockdev_xnvme -- bdev/blockdev.sh@787 -- # [[ xnvme == bdev ]] 00:13:58.608 11:55:57 blockdev_xnvme -- bdev/blockdev.sh@794 -- # [[ xnvme == gpt ]] 00:13:58.608 11:55:57 blockdev_xnvme -- bdev/blockdev.sh@798 -- # [[ xnvme == crypto_sw ]] 00:13:58.608 11:55:57 blockdev_xnvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:13:58.608 11:55:57 blockdev_xnvme -- bdev/blockdev.sh@811 -- # cleanup 00:13:58.883 11:55:57 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:58.883 11:55:57 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:58.883 11:55:57 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:13:58.883 11:55:57 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:13:58.883 11:55:57 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:13:58.883 11:55:57 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:13:58.883 11:55:57 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:59.492 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:21.449 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:21.449 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:22.013 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:23.912 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:23.912 00:14:23.912 real 1m10.243s 00:14:23.912 user 1m21.751s 00:14:23.912 sys 0m55.515s 00:14:23.912 11:56:22 blockdev_xnvme -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:23.912 11:56:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:23.912 ************************************ 00:14:23.912 END TEST blockdev_xnvme 00:14:23.912 ************************************ 00:14:23.912 11:56:22 -- spdk/autotest.sh@251 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:23.912 11:56:22 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:23.912 11:56:22 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:23.912 11:56:22 -- common/autotest_common.sh@10 -- 
# set +x 00:14:23.912 ************************************ 00:14:23.912 START TEST ublk 00:14:23.912 ************************************ 00:14:23.912 11:56:22 ublk -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:23.912 * Looking for test storage... 00:14:23.912 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:23.912 11:56:22 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:23.912 11:56:22 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:23.912 11:56:22 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:23.912 11:56:22 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:23.912 11:56:22 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:23.912 11:56:22 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:23.912 11:56:22 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:23.912 11:56:22 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:23.912 11:56:22 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:23.912 11:56:22 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:23.912 11:56:22 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:23.912 11:56:22 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:23.912 11:56:22 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:23.912 11:56:22 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:23.912 11:56:22 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:23.912 11:56:22 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:23.912 11:56:22 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:23.912 11:56:22 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:23.912 11:56:22 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:23.912 11:56:22 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:23.912 11:56:22 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:23.912 11:56:22 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:23.912 11:56:22 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:23.912 ************************************ 00:14:23.912 START TEST test_save_ublk_config 00:14:23.912 ************************************ 00:14:23.912 11:56:22 ublk.test_save_ublk_config -- common/autotest_common.sh@1121 -- # test_save_config 00:14:23.912 11:56:22 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:23.912 11:56:22 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=86971 00:14:23.912 11:56:22 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:23.912 11:56:22 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:23.912 11:56:22 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 86971 00:14:23.912 11:56:22 ublk.test_save_ublk_config -- common/autotest_common.sh@827 -- # '[' -z 86971 ']' 00:14:23.912 11:56:22 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.912 11:56:22 ublk.test_save_ublk_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:23.912 11:56:22 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
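Note: everything test_save_ublk_config does below goes over JSON-RPC to this spdk_tgt instance. A sketch of the equivalent manual sequence, inferred from the rpc_cmd traces and the saved config that follow (the bdev_malloc_create sizing is an assumption chosen to match the 8192 x 4096-byte malloc0 in the saved config, i.e. 32 MiB; the ublk_start_disk flags mirror num_queues=1 and queue_depth=128 there):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_create_target
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b malloc0 32 4096
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config

The test then stops this target and relaunches spdk_tgt with -c /dev/fd/63, piping the saved JSON back in, which is why the full subsystem dump appears twice below; it finally checks that /dev/ublkb0 comes back as a block device.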
00:14:23.912 11:56:22 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:23.912 11:56:22 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:23.912 [2024-07-21 11:56:22.680357] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:23.912 [2024-07-21 11:56:22.680488] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86971 ] 00:14:24.171 [2024-07-21 11:56:22.832591] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:24.171 [2024-07-21 11:56:22.888598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:24.735 11:56:23 ublk.test_save_ublk_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:24.735 11:56:23 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # return 0 00:14:24.735 11:56:23 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:24.735 11:56:23 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:24.735 11:56:23 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.735 11:56:23 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:24.735 [2024-07-21 11:56:23.474843] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:24.735 [2024-07-21 11:56:23.475141] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:24.735 malloc0 00:14:24.735 [2024-07-21 11:56:23.506955] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:24.735 [2024-07-21 11:56:23.507056] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:24.735 [2024-07-21 11:56:23.507076] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:24.735 [2024-07-21 11:56:23.507084] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:24.735 [2024-07-21 11:56:23.515910] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:24.735 [2024-07-21 11:56:23.515932] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:24.735 [2024-07-21 11:56:23.522854] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:24.735 [2024-07-21 11:56:23.522948] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:24.735 [2024-07-21 11:56:23.539849] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:24.735 0 00:14:24.735 11:56:23 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.735 11:56:23 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:24.735 11:56:23 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.735 11:56:23 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:24.993 11:56:23 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.993 11:56:23 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:24.993 "subsystems": [ 00:14:24.993 { 00:14:24.993 "subsystem": "keyring", 00:14:24.993 "config": [] 00:14:24.993 }, 00:14:24.993 { 00:14:24.993 "subsystem": "iobuf", 00:14:24.993 "config": [ 00:14:24.993 { 
00:14:24.993 "method": "iobuf_set_options", 00:14:24.993 "params": { 00:14:24.993 "small_pool_count": 8192, 00:14:24.993 "large_pool_count": 1024, 00:14:24.993 "small_bufsize": 8192, 00:14:24.993 "large_bufsize": 135168 00:14:24.993 } 00:14:24.993 } 00:14:24.993 ] 00:14:24.993 }, 00:14:24.993 { 00:14:24.993 "subsystem": "sock", 00:14:24.993 "config": [ 00:14:24.993 { 00:14:24.993 "method": "sock_set_default_impl", 00:14:24.993 "params": { 00:14:24.993 "impl_name": "posix" 00:14:24.993 } 00:14:24.993 }, 00:14:24.993 { 00:14:24.993 "method": "sock_impl_set_options", 00:14:24.993 "params": { 00:14:24.993 "impl_name": "ssl", 00:14:24.993 "recv_buf_size": 4096, 00:14:24.993 "send_buf_size": 4096, 00:14:24.993 "enable_recv_pipe": true, 00:14:24.993 "enable_quickack": false, 00:14:24.993 "enable_placement_id": 0, 00:14:24.993 "enable_zerocopy_send_server": true, 00:14:24.993 "enable_zerocopy_send_client": false, 00:14:24.993 "zerocopy_threshold": 0, 00:14:24.993 "tls_version": 0, 00:14:24.993 "enable_ktls": false 00:14:24.993 } 00:14:24.993 }, 00:14:24.993 { 00:14:24.993 "method": "sock_impl_set_options", 00:14:24.993 "params": { 00:14:24.993 "impl_name": "posix", 00:14:24.993 "recv_buf_size": 2097152, 00:14:24.993 "send_buf_size": 2097152, 00:14:24.993 "enable_recv_pipe": true, 00:14:24.993 "enable_quickack": false, 00:14:24.993 "enable_placement_id": 0, 00:14:24.993 "enable_zerocopy_send_server": true, 00:14:24.993 "enable_zerocopy_send_client": false, 00:14:24.993 "zerocopy_threshold": 0, 00:14:24.993 "tls_version": 0, 00:14:24.993 "enable_ktls": false 00:14:24.993 } 00:14:24.993 } 00:14:24.993 ] 00:14:24.993 }, 00:14:24.993 { 00:14:24.993 "subsystem": "vmd", 00:14:24.993 "config": [] 00:14:24.993 }, 00:14:24.993 { 00:14:24.993 "subsystem": "accel", 00:14:24.993 "config": [ 00:14:24.993 { 00:14:24.993 "method": "accel_set_options", 00:14:24.993 "params": { 00:14:24.993 "small_cache_size": 128, 00:14:24.993 "large_cache_size": 16, 00:14:24.993 "task_count": 2048, 00:14:24.993 "sequence_count": 2048, 00:14:24.993 "buf_count": 2048 00:14:24.993 } 00:14:24.993 } 00:14:24.993 ] 00:14:24.993 }, 00:14:24.993 { 00:14:24.993 "subsystem": "bdev", 00:14:24.993 "config": [ 00:14:24.993 { 00:14:24.993 "method": "bdev_set_options", 00:14:24.993 "params": { 00:14:24.993 "bdev_io_pool_size": 65535, 00:14:24.993 "bdev_io_cache_size": 256, 00:14:24.993 "bdev_auto_examine": true, 00:14:24.993 "iobuf_small_cache_size": 128, 00:14:24.993 "iobuf_large_cache_size": 16 00:14:24.993 } 00:14:24.993 }, 00:14:24.993 { 00:14:24.993 "method": "bdev_raid_set_options", 00:14:24.993 "params": { 00:14:24.993 "process_window_size_kb": 1024 00:14:24.993 } 00:14:24.993 }, 00:14:24.993 { 00:14:24.993 "method": "bdev_iscsi_set_options", 00:14:24.993 "params": { 00:14:24.993 "timeout_sec": 30 00:14:24.993 } 00:14:24.993 }, 00:14:24.993 { 00:14:24.993 "method": "bdev_nvme_set_options", 00:14:24.993 "params": { 00:14:24.993 "action_on_timeout": "none", 00:14:24.993 "timeout_us": 0, 00:14:24.993 "timeout_admin_us": 0, 00:14:24.993 "keep_alive_timeout_ms": 10000, 00:14:24.993 "arbitration_burst": 0, 00:14:24.993 "low_priority_weight": 0, 00:14:24.993 "medium_priority_weight": 0, 00:14:24.993 "high_priority_weight": 0, 00:14:24.993 "nvme_adminq_poll_period_us": 10000, 00:14:24.993 "nvme_ioq_poll_period_us": 0, 00:14:24.993 "io_queue_requests": 0, 00:14:24.993 "delay_cmd_submit": true, 00:14:24.993 "transport_retry_count": 4, 00:14:24.993 "bdev_retry_count": 3, 00:14:24.993 "transport_ack_timeout": 0, 00:14:24.993 
"ctrlr_loss_timeout_sec": 0, 00:14:24.993 "reconnect_delay_sec": 0, 00:14:24.993 "fast_io_fail_timeout_sec": 0, 00:14:24.993 "disable_auto_failback": false, 00:14:24.993 "generate_uuids": false, 00:14:24.993 "transport_tos": 0, 00:14:24.993 "nvme_error_stat": false, 00:14:24.993 "rdma_srq_size": 0, 00:14:24.993 "io_path_stat": false, 00:14:24.993 "allow_accel_sequence": false, 00:14:24.993 "rdma_max_cq_size": 0, 00:14:24.993 "rdma_cm_event_timeout_ms": 0, 00:14:24.993 "dhchap_digests": [ 00:14:24.993 "sha256", 00:14:24.993 "sha384", 00:14:24.993 "sha512" 00:14:24.993 ], 00:14:24.993 "dhchap_dhgroups": [ 00:14:24.993 "null", 00:14:24.993 "ffdhe2048", 00:14:24.993 "ffdhe3072", 00:14:24.993 "ffdhe4096", 00:14:24.993 "ffdhe6144", 00:14:24.993 "ffdhe8192" 00:14:24.993 ] 00:14:24.993 } 00:14:24.993 }, 00:14:24.993 { 00:14:24.993 "method": "bdev_nvme_set_hotplug", 00:14:24.993 "params": { 00:14:24.993 "period_us": 100000, 00:14:24.993 "enable": false 00:14:24.993 } 00:14:24.993 }, 00:14:24.993 { 00:14:24.993 "method": "bdev_malloc_create", 00:14:24.993 "params": { 00:14:24.993 "name": "malloc0", 00:14:24.993 "num_blocks": 8192, 00:14:24.993 "block_size": 4096, 00:14:24.993 "physical_block_size": 4096, 00:14:24.993 "uuid": "451e4b14-78de-4da5-a814-d46d016c97df", 00:14:24.993 "optimal_io_boundary": 0 00:14:24.993 } 00:14:24.993 }, 00:14:24.993 { 00:14:24.993 "method": "bdev_wait_for_examine" 00:14:24.993 } 00:14:24.993 ] 00:14:24.993 }, 00:14:24.993 { 00:14:24.993 "subsystem": "scsi", 00:14:24.993 "config": null 00:14:24.993 }, 00:14:24.993 { 00:14:24.993 "subsystem": "scheduler", 00:14:24.993 "config": [ 00:14:24.993 { 00:14:24.993 "method": "framework_set_scheduler", 00:14:24.993 "params": { 00:14:24.993 "name": "static" 00:14:24.993 } 00:14:24.993 } 00:14:24.993 ] 00:14:24.993 }, 00:14:24.993 { 00:14:24.993 "subsystem": "vhost_scsi", 00:14:24.993 "config": [] 00:14:24.993 }, 00:14:24.993 { 00:14:24.993 "subsystem": "vhost_blk", 00:14:24.993 "config": [] 00:14:24.993 }, 00:14:24.993 { 00:14:24.993 "subsystem": "ublk", 00:14:24.993 "config": [ 00:14:24.993 { 00:14:24.993 "method": "ublk_create_target", 00:14:24.993 "params": { 00:14:24.993 "cpumask": "1" 00:14:24.993 } 00:14:24.993 }, 00:14:24.993 { 00:14:24.993 "method": "ublk_start_disk", 00:14:24.993 "params": { 00:14:24.993 "bdev_name": "malloc0", 00:14:24.993 "ublk_id": 0, 00:14:24.993 "num_queues": 1, 00:14:24.993 "queue_depth": 128 00:14:24.993 } 00:14:24.993 } 00:14:24.993 ] 00:14:24.993 }, 00:14:24.993 { 00:14:24.993 "subsystem": "nbd", 00:14:24.993 "config": [] 00:14:24.993 }, 00:14:24.993 { 00:14:24.993 "subsystem": "nvmf", 00:14:24.993 "config": [ 00:14:24.993 { 00:14:24.993 "method": "nvmf_set_config", 00:14:24.993 "params": { 00:14:24.993 "discovery_filter": "match_any", 00:14:24.993 "admin_cmd_passthru": { 00:14:24.993 "identify_ctrlr": false 00:14:24.993 } 00:14:24.993 } 00:14:24.993 }, 00:14:24.993 { 00:14:24.993 "method": "nvmf_set_max_subsystems", 00:14:24.993 "params": { 00:14:24.993 "max_subsystems": 1024 00:14:24.993 } 00:14:24.993 }, 00:14:24.993 { 00:14:24.993 "method": "nvmf_set_crdt", 00:14:24.993 "params": { 00:14:24.993 "crdt1": 0, 00:14:24.993 "crdt2": 0, 00:14:24.993 "crdt3": 0 00:14:24.993 } 00:14:24.993 } 00:14:24.993 ] 00:14:24.993 }, 00:14:24.993 { 00:14:24.993 "subsystem": "iscsi", 00:14:24.993 "config": [ 00:14:24.993 { 00:14:24.993 "method": "iscsi_set_options", 00:14:24.993 "params": { 00:14:24.993 "node_base": "iqn.2016-06.io.spdk", 00:14:24.993 "max_sessions": 128, 00:14:24.993 "max_connections_per_session": 
2, 00:14:24.993 "max_queue_depth": 64, 00:14:24.993 "default_time2wait": 2, 00:14:24.993 "default_time2retain": 20, 00:14:24.993 "first_burst_length": 8192, 00:14:24.993 "immediate_data": true, 00:14:24.993 "allow_duplicated_isid": false, 00:14:24.993 "error_recovery_level": 0, 00:14:24.993 "nop_timeout": 60, 00:14:24.993 "nop_in_interval": 30, 00:14:24.993 "disable_chap": false, 00:14:24.993 "require_chap": false, 00:14:24.993 "mutual_chap": false, 00:14:24.993 "chap_group": 0, 00:14:24.993 "max_large_datain_per_connection": 64, 00:14:24.993 "max_r2t_per_connection": 4, 00:14:24.993 "pdu_pool_size": 36864, 00:14:24.993 "immediate_data_pool_size": 16384, 00:14:24.993 "data_out_pool_size": 2048 00:14:24.993 } 00:14:24.993 } 00:14:24.993 ] 00:14:24.993 } 00:14:24.993 ] 00:14:24.993 }' 00:14:24.993 11:56:23 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 86971 00:14:24.993 11:56:23 ublk.test_save_ublk_config -- common/autotest_common.sh@946 -- # '[' -z 86971 ']' 00:14:24.993 11:56:23 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # kill -0 86971 00:14:24.993 11:56:23 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # uname 00:14:24.993 11:56:23 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:24.993 11:56:23 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86971 00:14:24.993 11:56:23 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:24.993 11:56:23 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:24.993 11:56:23 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86971' 00:14:24.993 killing process with pid 86971 00:14:24.993 11:56:23 ublk.test_save_ublk_config -- common/autotest_common.sh@965 -- # kill 86971 00:14:24.993 11:56:23 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # wait 86971 00:14:25.251 [2024-07-21 11:56:24.090976] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:25.508 [2024-07-21 11:56:24.121915] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:25.508 [2024-07-21 11:56:24.122067] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:25.508 [2024-07-21 11:56:24.128849] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:25.508 [2024-07-21 11:56:24.128902] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:25.508 [2024-07-21 11:56:24.128913] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:25.509 [2024-07-21 11:56:24.128939] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:14:25.509 [2024-07-21 11:56:24.129101] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:14:25.767 11:56:24 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:25.767 11:56:24 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=86998 00:14:25.767 11:56:24 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 86998 00:14:25.767 11:56:24 ublk.test_save_ublk_config -- common/autotest_common.sh@827 -- # '[' -z 86998 ']' 00:14:25.767 11:56:24 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:25.767 "subsystems": [ 00:14:25.767 { 00:14:25.767 "subsystem": "keyring", 00:14:25.767 "config": [] 00:14:25.767 }, 00:14:25.767 { 00:14:25.767 "subsystem": 
"iobuf", 00:14:25.767 "config": [ 00:14:25.767 { 00:14:25.767 "method": "iobuf_set_options", 00:14:25.767 "params": { 00:14:25.767 "small_pool_count": 8192, 00:14:25.767 "large_pool_count": 1024, 00:14:25.767 "small_bufsize": 8192, 00:14:25.767 "large_bufsize": 135168 00:14:25.767 } 00:14:25.767 } 00:14:25.767 ] 00:14:25.767 }, 00:14:25.767 { 00:14:25.767 "subsystem": "sock", 00:14:25.767 "config": [ 00:14:25.767 { 00:14:25.767 "method": "sock_set_default_impl", 00:14:25.767 "params": { 00:14:25.767 "impl_name": "posix" 00:14:25.767 } 00:14:25.767 }, 00:14:25.767 { 00:14:25.767 "method": "sock_impl_set_options", 00:14:25.767 "params": { 00:14:25.767 "impl_name": "ssl", 00:14:25.767 "recv_buf_size": 4096, 00:14:25.767 "send_buf_size": 4096, 00:14:25.767 "enable_recv_pipe": true, 00:14:25.767 "enable_quickack": false, 00:14:25.767 "enable_placement_id": 0, 00:14:25.767 "enable_zerocopy_send_server": true, 00:14:25.767 "enable_zerocopy_send_client": false, 00:14:25.767 "zerocopy_threshold": 0, 00:14:25.767 "tls_version": 0, 00:14:25.767 "enable_ktls": false 00:14:25.767 } 00:14:25.767 }, 00:14:25.767 { 00:14:25.767 "method": "sock_impl_set_options", 00:14:25.767 "params": { 00:14:25.767 "impl_name": "posix", 00:14:25.767 "recv_buf_size": 2097152, 00:14:25.767 "send_buf_size": 2097152, 00:14:25.767 "enable_recv_pipe": true, 00:14:25.767 "enable_quickack": false, 00:14:25.767 "enable_placement_id": 0, 00:14:25.767 "enable_zerocopy_send_server": true, 00:14:25.767 "enable_zerocopy_send_client": false, 00:14:25.767 "zerocopy_threshold": 0, 00:14:25.767 "tls_version": 0, 00:14:25.767 "enable_ktls": false 00:14:25.767 } 00:14:25.767 } 00:14:25.767 ] 00:14:25.767 }, 00:14:25.767 { 00:14:25.767 "subsystem": "vmd", 00:14:25.767 "config": [] 00:14:25.767 }, 00:14:25.767 { 00:14:25.767 "subsystem": "accel", 00:14:25.767 "config": [ 00:14:25.767 { 00:14:25.767 "method": "accel_set_options", 00:14:25.767 "params": { 00:14:25.767 "small_cache_size": 128, 00:14:25.767 "large_cache_size": 16, 00:14:25.767 "task_count": 2048, 00:14:25.767 "sequence_count": 2048, 00:14:25.767 "buf_count": 2048 00:14:25.767 } 00:14:25.767 } 00:14:25.767 ] 00:14:25.767 }, 00:14:25.767 { 00:14:25.767 "subsystem": "bdev", 00:14:25.767 "config": [ 00:14:25.767 { 00:14:25.767 "method": "bdev_set_options", 00:14:25.767 "params": { 00:14:25.767 "bdev_io_pool_size": 65535, 00:14:25.767 "bdev_io_cache_size": 256, 00:14:25.767 "bdev_auto_examine": true, 00:14:25.767 "iobuf_small_cache_size": 128, 00:14:25.767 "iobuf_large_cache_size": 16 00:14:25.767 } 00:14:25.767 }, 00:14:25.767 { 00:14:25.767 "method": "bdev_raid_set_options", 00:14:25.767 "params": { 00:14:25.767 "process_window_size_kb": 1024 00:14:25.767 } 00:14:25.767 }, 00:14:25.767 { 00:14:25.767 "method": "bdev_iscsi_set_options", 00:14:25.767 "params": { 00:14:25.767 "timeout_sec": 30 00:14:25.767 } 00:14:25.767 }, 00:14:25.767 { 00:14:25.767 "method": "bdev_nvme_set_options", 00:14:25.767 "params": { 00:14:25.767 "action_on_timeout": "none", 00:14:25.767 "timeout_us": 0, 00:14:25.767 "timeout_admin_us": 0, 00:14:25.767 "keep_alive_timeout_ms": 10000, 00:14:25.767 "arbitration_burst": 0, 00:14:25.767 "low_priority_weight": 0, 00:14:25.767 "medium_priority_weight": 0, 00:14:25.767 "high_priority_weight": 0, 00:14:25.767 "nvme_adminq_poll_period_us": 10000, 00:14:25.767 "nvme_ioq_poll_period_us": 0, 00:14:25.767 "io_queue_requests": 0, 00:14:25.767 "delay_cmd_submit": true, 00:14:25.767 "transport_retry_count": 4, 00:14:25.767 "bdev_retry_count": 3, 00:14:25.767 
"transport_ack_timeout": 0, 00:14:25.767 "ctrlr_loss_timeout_sec": 0, 00:14:25.767 "reconnect_delay_sec": 0, 00:14:25.767 "fast_io_fail_timeout_sec": 0, 00:14:25.767 "disable_auto_failback": false, 00:14:25.767 "generate_uuids": false, 00:14:25.767 "transport_tos": 0, 00:14:25.767 "nvme_error_stat": false, 00:14:25.767 "rdma_srq_size": 0, 00:14:25.767 "io_path_stat": false, 00:14:25.767 "allow_accel_sequence": false, 00:14:25.767 "rdma_max_cq_size": 0, 00:14:25.767 "rdma_cm_event_timeout_ms": 0, 00:14:25.767 "dhchap_digests": [ 00:14:25.767 "sha256", 00:14:25.767 "sha384", 00:14:25.767 "sha512" 00:14:25.767 ], 00:14:25.767 "dhchap_dhgroups": [ 00:14:25.767 "null", 00:14:25.767 "ffdhe2048", 00:14:25.767 "ffdhe3072", 00:14:25.767 "ffdhe4096", 00:14:25.767 "ffdhe6144", 00:14:25.767 "ffdhe8192" 00:14:25.767 ] 00:14:25.767 } 00:14:25.767 }, 00:14:25.767 { 00:14:25.767 "method": "bdev_nvme_set_hotplug", 00:14:25.767 "params": { 00:14:25.767 "period_us": 100000, 00:14:25.767 "enable": false 00:14:25.767 } 00:14:25.767 }, 00:14:25.767 { 00:14:25.767 "method": "bdev_malloc_create", 00:14:25.767 "params": { 00:14:25.767 "name": "malloc0", 00:14:25.767 "num_blocks": 8192, 00:14:25.767 "block_size": 4096, 00:14:25.767 "physical_block_size": 4096, 00:14:25.767 "uuid": "451e4b14-78de-4da5-a814-d46d016c97df", 00:14:25.767 "optimal_io_boundary": 0 00:14:25.767 } 00:14:25.767 }, 00:14:25.767 { 00:14:25.767 "method": "bdev_wait_for_examine" 00:14:25.767 } 00:14:25.767 ] 00:14:25.767 }, 00:14:25.767 { 00:14:25.767 "subsystem": "scsi", 00:14:25.767 "config": null 00:14:25.767 }, 00:14:25.767 { 00:14:25.767 "subsystem": "scheduler", 00:14:25.767 "config": [ 00:14:25.767 { 00:14:25.767 "method": "framework_set_scheduler", 00:14:25.767 "params": { 00:14:25.767 "name": "static" 00:14:25.767 } 00:14:25.767 } 00:14:25.767 ] 00:14:25.767 }, 00:14:25.767 { 00:14:25.767 "subsystem": "vhost_scsi", 00:14:25.767 "config": [] 00:14:25.767 }, 00:14:25.767 { 00:14:25.767 "subsystem": "vhost_blk", 00:14:25.767 "config": [] 00:14:25.767 }, 00:14:25.767 { 00:14:25.767 "subsystem": "ublk", 00:14:25.767 "config": [ 00:14:25.767 { 00:14:25.767 "method": "ublk_create_target", 00:14:25.767 "params": { 00:14:25.767 "cpumask": "1" 00:14:25.767 } 00:14:25.767 }, 00:14:25.767 { 00:14:25.767 "method": "ublk_start_disk", 00:14:25.767 "params": { 00:14:25.767 "bdev_name": "malloc0", 00:14:25.767 "ublk_id": 0, 00:14:25.767 "num_queues": 1, 00:14:25.767 "queue_depth": 128 00:14:25.767 } 00:14:25.767 } 00:14:25.767 ] 00:14:25.767 }, 00:14:25.767 { 00:14:25.767 "subsystem": "nbd", 00:14:25.767 "config": [] 00:14:25.767 }, 00:14:25.767 { 00:14:25.767 "subsystem": "nvmf", 00:14:25.767 "config": [ 00:14:25.767 { 00:14:25.767 "method": "nvmf_set_config", 00:14:25.767 "params": { 00:14:25.767 "discovery_filter": "match_any", 00:14:25.767 "admin_cmd_passthru": { 00:14:25.767 "identify_ctrlr": false 00:14:25.767 } 00:14:25.767 } 00:14:25.767 }, 00:14:25.767 { 00:14:25.767 "method": "nvmf_set_max_subsystems", 00:14:25.767 "params": { 00:14:25.767 "max_subsystems": 1024 00:14:25.767 } 00:14:25.767 }, 00:14:25.767 { 00:14:25.767 "method": "nvmf_set_crdt", 00:14:25.767 "params": { 00:14:25.767 "crdt1": 0, 00:14:25.767 "crdt2": 0, 00:14:25.767 "crdt3": 0 00:14:25.767 } 00:14:25.767 } 00:14:25.767 ] 00:14:25.767 }, 00:14:25.767 { 00:14:25.767 "subsystem": "iscsi", 00:14:25.767 "config": [ 00:14:25.767 { 00:14:25.767 "method": "iscsi_set_options", 00:14:25.767 "params": { 00:14:25.767 "node_base": "iqn.2016-06.io.spdk", 00:14:25.767 "max_sessions": 128, 
00:14:25.767 "max_connections_per_session": 2, 00:14:25.767 "max_queue_depth": 64, 00:14:25.767 "default_time2wait": 2, 00:14:25.767 "default_time2retain": 20, 00:14:25.767 "first_burst_length": 8192, 00:14:25.767 "immediate_data": true, 00:14:25.767 "allow_duplicated_isid": false, 00:14:25.767 "error_recovery_level": 0, 00:14:25.767 "nop_timeout": 60, 00:14:25.767 "nop_in_interval": 30, 00:14:25.767 "disable_chap": false, 00:14:25.767 "require_chap": false, 00:14:25.767 "mutual_chap": false, 00:14:25.767 "chap_group": 0, 00:14:25.767 "max_large_datain_per_connection": 64, 00:14:25.767 "max_r2t_per_connection": 4, 00:14:25.767 "pdu_pool_size": 36864, 00:14:25.767 "immediate_data_pool_size": 16384, 00:14:25.767 "data_out_pool_size": 2048 00:14:25.767 } 00:14:25.767 } 00:14:25.767 ] 00:14:25.767 } 00:14:25.767 ] 00:14:25.767 }' 00:14:25.767 11:56:24 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.767 11:56:24 ublk.test_save_ublk_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:25.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.767 11:56:24 ublk.test_save_ublk_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.767 11:56:24 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:25.767 11:56:24 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:25.767 [2024-07-21 11:56:24.488621] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:25.767 [2024-07-21 11:56:24.488793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86998 ] 00:14:26.025 [2024-07-21 11:56:24.662928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:26.025 [2024-07-21 11:56:24.721252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.283 [2024-07-21 11:56:25.049836] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:26.283 [2024-07-21 11:56:25.050111] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:26.283 [2024-07-21 11:56:25.057959] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:26.283 [2024-07-21 11:56:25.058037] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:26.283 [2024-07-21 11:56:25.058049] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:26.283 [2024-07-21 11:56:25.058056] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:26.283 [2024-07-21 11:56:25.066910] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:26.283 [2024-07-21 11:56:25.066945] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:26.283 [2024-07-21 11:56:25.073846] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:26.283 [2024-07-21 11:56:25.073950] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:26.283 [2024-07-21 11:56:25.090844] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:26.558 11:56:25 ublk.test_save_ublk_config 
-- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:26.558 11:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # return 0 00:14:26.558 11:56:25 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:26.558 11:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.558 11:56:25 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:26.558 11:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:26.558 11:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.558 11:56:25 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:26.558 11:56:25 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:26.558 11:56:25 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 86998 00:14:26.558 11:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@946 -- # '[' -z 86998 ']' 00:14:26.558 11:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # kill -0 86998 00:14:26.558 11:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # uname 00:14:26.558 11:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:26.558 11:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 86998 00:14:26.558 11:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:26.558 11:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:14:26.558 11:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 86998' 00:14:26.558 killing process with pid 86998 00:14:26.558 11:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@965 -- # kill 86998 00:14:26.558 11:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # wait 86998 00:14:26.814 [2024-07-21 11:56:25.571773] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:26.815 [2024-07-21 11:56:25.602914] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:26.815 [2024-07-21 11:56:25.603070] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:26.815 [2024-07-21 11:56:25.610875] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:26.815 [2024-07-21 11:56:25.610940] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:26.815 [2024-07-21 11:56:25.610955] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:26.815 [2024-07-21 11:56:25.610979] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:14:26.815 [2024-07-21 11:56:25.611126] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:14:27.072 11:56:25 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:27.072 00:14:27.072 real 0m3.262s 00:14:27.072 user 0m2.422s 00:14:27.072 sys 0m1.370s 00:14:27.072 11:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:27.072 11:56:25 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:27.072 ************************************ 00:14:27.072 END TEST test_save_ublk_config 00:14:27.072 ************************************ 00:14:27.072 11:56:25 ublk -- ublk/ublk.sh@139 -- # spdk_pid=87054 00:14:27.072 11:56:25 ublk -- 
ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:27.072 11:56:25 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:27.072 11:56:25 ublk -- ublk/ublk.sh@141 -- # waitforlisten 87054 00:14:27.072 11:56:25 ublk -- common/autotest_common.sh@827 -- # '[' -z 87054 ']' 00:14:27.072 11:56:25 ublk -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.072 11:56:25 ublk -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:27.072 11:56:25 ublk -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.072 11:56:25 ublk -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:27.072 11:56:25 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.330 [2024-07-21 11:56:26.005014] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:27.330 [2024-07-21 11:56:26.005114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87054 ] 00:14:27.330 [2024-07-21 11:56:26.160006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:27.588 [2024-07-21 11:56:26.218398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.588 [2024-07-21 11:56:26.218499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.154 11:56:26 ublk -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:28.154 11:56:26 ublk -- common/autotest_common.sh@860 -- # return 0 00:14:28.154 11:56:26 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:28.154 11:56:26 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:28.154 11:56:26 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:28.154 11:56:26 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.154 ************************************ 00:14:28.154 START TEST test_create_ublk 00:14:28.154 ************************************ 00:14:28.154 11:56:26 ublk.test_create_ublk -- common/autotest_common.sh@1121 -- # test_create_ublk 00:14:28.154 11:56:26 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:28.154 11:56:26 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.154 11:56:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.154 [2024-07-21 11:56:26.774841] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:28.154 [2024-07-21 11:56:26.776172] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:28.154 11:56:26 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.154 11:56:26 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:28.154 11:56:26 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:28.154 11:56:26 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.154 11:56:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.154 11:56:26 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.154 11:56:26 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:28.154 11:56:26 
ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:28.154 11:56:26 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.154 11:56:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.154 [2024-07-21 11:56:26.846967] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:28.154 [2024-07-21 11:56:26.847366] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:28.154 [2024-07-21 11:56:26.847387] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:28.154 [2024-07-21 11:56:26.847395] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:28.154 [2024-07-21 11:56:26.855161] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:28.154 [2024-07-21 11:56:26.855189] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:28.154 [2024-07-21 11:56:26.862862] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:28.154 [2024-07-21 11:56:26.869885] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:28.154 [2024-07-21 11:56:26.882845] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:28.154 11:56:26 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.154 11:56:26 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:28.154 11:56:26 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:28.154 11:56:26 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:28.154 11:56:26 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:28.154 11:56:26 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:28.154 11:56:26 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.154 11:56:26 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:28.154 { 00:14:28.154 "ublk_device": "/dev/ublkb0", 00:14:28.154 "id": 0, 00:14:28.154 "queue_depth": 512, 00:14:28.154 "num_queues": 4, 00:14:28.154 "bdev_name": "Malloc0" 00:14:28.154 } 00:14:28.154 ]' 00:14:28.154 11:56:26 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:28.154 11:56:26 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:28.154 11:56:26 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:28.154 11:56:26 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:28.154 11:56:26 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:28.413 11:56:27 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:28.413 11:56:27 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:28.413 11:56:27 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:28.413 11:56:27 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:28.413 11:56:27 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:28.413 11:56:27 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:28.413 11:56:27 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:28.413 11:56:27 ublk.test_create_ublk -- lvol/common.sh@41 -- # local 
offset=0 00:14:28.413 11:56:27 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:28.413 11:56:27 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:28.413 11:56:27 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:28.413 11:56:27 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:28.413 11:56:27 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:28.413 11:56:27 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:28.413 11:56:27 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:28.413 11:56:27 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:28.413 11:56:27 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:28.413 fio: verification read phase will never start because write phase uses all of runtime 00:14:28.413 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:28.413 fio-3.35 00:14:28.413 Starting 1 process 00:14:40.682 00:14:40.682 fio_test: (groupid=0, jobs=1): err= 0: pid=87093: Sun Jul 21 11:56:37 2024 00:14:40.682 write: IOPS=15.5k, BW=60.6MiB/s (63.5MB/s)(606MiB/10001msec); 0 zone resets 00:14:40.682 clat (usec): min=35, max=4105, avg=63.71, stdev=131.94 00:14:40.682 lat (usec): min=36, max=4106, avg=64.12, stdev=131.95 00:14:40.682 clat percentiles (usec): 00:14:40.682 | 1.00th=[ 52], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 56], 00:14:40.682 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 58], 60.00th=[ 58], 00:14:40.682 | 70.00th=[ 59], 80.00th=[ 60], 90.00th=[ 63], 95.00th=[ 66], 00:14:40.682 | 99.00th=[ 74], 99.50th=[ 80], 99.90th=[ 3097], 99.95th=[ 3490], 00:14:40.682 | 99.99th=[ 3785] 00:14:40.682 bw ( KiB/s): min=28528, max=66608, per=99.84%, avg=61930.11, stdev=9605.02, samples=19 00:14:40.682 iops : min= 7132, max=16652, avg=15482.53, stdev=2401.26, samples=19 00:14:40.682 lat (usec) : 50=0.16%, 100=99.57%, 250=0.01%, 500=0.01%, 750=0.01% 00:14:40.682 lat (usec) : 1000=0.01% 00:14:40.682 lat (msec) : 2=0.06%, 4=0.16%, 10=0.01% 00:14:40.682 cpu : usr=2.01%, sys=7.98%, ctx=155096, majf=0, minf=796 00:14:40.682 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:40.682 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.682 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:40.682 issued rwts: total=0,155093,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:40.682 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:40.682 00:14:40.682 Run status group 0 (all jobs): 00:14:40.682 WRITE: bw=60.6MiB/s (63.5MB/s), 60.6MiB/s-60.6MiB/s (63.5MB/s-63.5MB/s), io=606MiB (635MB), run=10001-10001msec 00:14:40.682 00:14:40.682 Disk stats (read/write): 00:14:40.682 ublkb0: ios=0/153450, merge=0/0, ticks=0/8954, in_queue=8954, util=99.06% 00:14:40.682 11:56:37 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:40.682 11:56:37 ublk.test_create_ublk -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.682 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.682 [2024-07-21 11:56:37.347638] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:40.682 [2024-07-21 11:56:37.382936] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:40.682 [2024-07-21 11:56:37.383994] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:40.682 [2024-07-21 11:56:37.391091] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:40.682 [2024-07-21 11:56:37.391374] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:40.682 [2024-07-21 11:56:37.391396] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:40.682 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.682 11:56:37 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:14:40.682 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@648 -- # local es=0 00:14:40.682 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:40.682 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:14:40.682 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:40.682 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:14:40.682 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:40.682 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # rpc_cmd ublk_stop_disk 0 00:14:40.682 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.682 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.682 [2024-07-21 11:56:37.405981] ublk.c:1071:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:40.682 request: 00:14:40.682 { 00:14:40.682 "ublk_id": 0, 00:14:40.682 "method": "ublk_stop_disk", 00:14:40.682 "req_id": 1 00:14:40.682 } 00:14:40.682 Got JSON-RPC error response 00:14:40.682 response: 00:14:40.682 { 00:14:40.682 "code": -19, 00:14:40.682 "message": "No such device" 00:14:40.683 } 00:14:40.683 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:14:40.683 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@651 -- # es=1 00:14:40.683 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:40.683 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:40.683 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:40.683 11:56:37 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:40.683 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.683 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.683 [2024-07-21 11:56:37.422919] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:14:40.683 [2024-07-21 11:56:37.424589] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:14:40.683 [2024-07-21 11:56:37.424626] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:40.683 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.683 11:56:37 
ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:40.683 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.683 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.683 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.683 11:56:37 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:40.683 11:56:37 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:40.683 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.683 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.683 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.683 11:56:37 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:40.683 11:56:37 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:40.683 11:56:37 ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:40.683 11:56:37 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:40.683 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.683 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.683 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.683 11:56:37 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:40.683 11:56:37 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:40.683 11:56:37 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:40.683 00:14:40.683 real 0m10.827s 00:14:40.683 user 0m0.573s 00:14:40.683 sys 0m0.912s 00:14:40.683 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:40.683 11:56:37 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.683 ************************************ 00:14:40.683 END TEST test_create_ublk 00:14:40.683 ************************************ 00:14:40.683 11:56:37 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:40.683 11:56:37 ublk -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:40.683 11:56:37 ublk -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:40.683 11:56:37 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.683 ************************************ 00:14:40.683 START TEST test_create_multi_ublk 00:14:40.683 ************************************ 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@1121 -- # test_create_multi_ublk 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.683 [2024-07-21 11:56:37.652845] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:40.683 [2024-07-21 11:56:37.653885] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:40.683 11:56:37 
ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.683 [2024-07-21 11:56:37.732986] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:40.683 [2024-07-21 11:56:37.733427] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:40.683 [2024-07-21 11:56:37.733444] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:40.683 [2024-07-21 11:56:37.733453] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:40.683 [2024-07-21 11:56:37.742097] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:40.683 [2024-07-21 11:56:37.742122] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:40.683 [2024-07-21 11:56:37.748856] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:40.683 [2024-07-21 11:56:37.749400] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:40.683 [2024-07-21 11:56:37.765851] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.683 [2024-07-21 11:56:37.841971] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:40.683 [2024-07-21 11:56:37.842343] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:40.683 [2024-07-21 11:56:37.842378] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:40.683 [2024-07-21 11:56:37.842384] 
ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:40.683 [2024-07-21 11:56:37.854105] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:40.683 [2024-07-21 11:56:37.854123] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:40.683 [2024-07-21 11:56:37.864864] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:40.683 [2024-07-21 11:56:37.865434] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:40.683 [2024-07-21 11:56:37.888866] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.683 11:56:37 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.683 [2024-07-21 11:56:37.966993] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:40.683 [2024-07-21 11:56:37.967467] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:40.683 [2024-07-21 11:56:37.967486] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:40.683 [2024-07-21 11:56:37.967497] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:40.683 [2024-07-21 11:56:37.974857] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:40.683 [2024-07-21 11:56:37.974882] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:40.683 [2024-07-21 11:56:37.982842] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:40.683 [2024-07-21 11:56:37.983472] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:40.683 [2024-07-21 11:56:37.995853] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:40.683 11:56:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.683 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:40.683 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:40.683 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:40.683 11:56:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.683 11:56:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
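The same control-plane handshake repeats for each device in the loop above: create a malloc bdev, then export it, which the target drives through UBLK_CMD_ADD_DEV, UBLK_CMD_SET_PARAMS and UBLK_CMD_START_DEV. A condensed sketch of one iteration as standalone rpc.py calls, assuming a running target with the ublk target already created (names and sizes mirror the test's rpc_cmd traces):

    # 128 MiB malloc bdev with a 4096-byte block size
    scripts/rpc.py bdev_malloc_create -b Malloc0 128 4096
    # expose it as /dev/ublkb0 with 4 queues of depth 512
    scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512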
00:14:40.683 11:56:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.683 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:40.683 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:40.683 11:56:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.683 11:56:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.683 [2024-07-21 11:56:38.074975] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:40.683 [2024-07-21 11:56:38.075388] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:40.683 [2024-07-21 11:56:38.075407] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:40.684 [2024-07-21 11:56:38.075414] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:40.684 [2024-07-21 11:56:38.082875] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:40.684 [2024-07-21 11:56:38.082897] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:40.684 [2024-07-21 11:56:38.090859] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:40.684 [2024-07-21 11:56:38.091432] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:40.684 [2024-07-21 11:56:38.099883] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:40.684 { 00:14:40.684 "ublk_device": "/dev/ublkb0", 00:14:40.684 "id": 0, 00:14:40.684 "queue_depth": 512, 00:14:40.684 "num_queues": 4, 00:14:40.684 "bdev_name": "Malloc0" 00:14:40.684 }, 00:14:40.684 { 00:14:40.684 "ublk_device": "/dev/ublkb1", 00:14:40.684 "id": 1, 00:14:40.684 "queue_depth": 512, 00:14:40.684 "num_queues": 4, 00:14:40.684 "bdev_name": "Malloc1" 00:14:40.684 }, 00:14:40.684 { 00:14:40.684 "ublk_device": "/dev/ublkb2", 00:14:40.684 "id": 2, 00:14:40.684 "queue_depth": 512, 00:14:40.684 "num_queues": 4, 00:14:40.684 "bdev_name": "Malloc2" 00:14:40.684 }, 00:14:40.684 { 00:14:40.684 "ublk_device": "/dev/ublkb3", 00:14:40.684 "id": 3, 00:14:40.684 "queue_depth": 512, 00:14:40.684 "num_queues": 4, 00:14:40.684 "bdev_name": "Malloc3" 00:14:40.684 } 00:14:40.684 ]' 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- 
ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 
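Each [[ ... ]] comparison above checks one field of the ublk_get_disks output against the value the device was created with. The equivalent standalone checks, assuming the four devices are still up:

    scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device'   # expect /dev/ublkb0
    scripts/rpc.py ublk_get_disks | jq -r '.[0].queue_depth'   # expect 512
    scripts/rpc.py ublk_get_disks | jq -r '.[0].bdev_name'     # expect Malloc0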
00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.684 11:56:38 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.684 [2024-07-21 11:56:38.969946] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:40.684 [2024-07-21 11:56:39.009314] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:40.684 [2024-07-21 11:56:39.010642] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:40.684 [2024-07-21 11:56:39.016873] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:40.684 [2024-07-21 11:56:39.017157] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:40.684 [2024-07-21 11:56:39.017170] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:40.684 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.684 11:56:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:40.684 11:56:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:40.684 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.684 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.684 [2024-07-21 11:56:39.032983] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:40.684 [2024-07-21 11:56:39.062848] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:40.684 [2024-07-21 11:56:39.063993] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:40.684 [2024-07-21 11:56:39.070874] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:40.684 [2024-07-21 11:56:39.071186] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:40.684 [2024-07-21 11:56:39.071206] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:40.684 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.684 11:56:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:40.684 11:56:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:40.684 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.684 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.684 [2024-07-21 11:56:39.078007] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:40.684 [2024-07-21 11:56:39.110912] ublk.c: 
328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:40.684 [2024-07-21 11:56:39.112010] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:40.684 [2024-07-21 11:56:39.118958] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:40.684 [2024-07-21 11:56:39.119223] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:40.684 [2024-07-21 11:56:39.119251] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:40.684 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.684 11:56:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:40.684 11:56:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:40.684 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.684 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.684 [2024-07-21 11:56:39.132916] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:40.684 [2024-07-21 11:56:39.170875] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:40.684 [2024-07-21 11:56:39.171918] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:40.684 [2024-07-21 11:56:39.179930] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:40.684 [2024-07-21 11:56:39.180227] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:40.684 [2024-07-21 11:56:39.180245] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:40.684 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.684 11:56:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:40.684 [2024-07-21 11:56:39.385981] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:14:40.684 [2024-07-21 11:56:39.387152] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:14:40.684 [2024-07-21 11:56:39.387194] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:40.684 11:56:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:40.684 11:56:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:40.684 11:56:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:40.684 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.684 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.684 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.685 11:56:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:40.685 11:56:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:40.685 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.685 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.685 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.685 11:56:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:40.685 11:56:39 ublk.test_create_multi_ublk -- 
ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:40.685 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.685 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.943 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.943 11:56:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:40.943 11:56:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:40.943 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.943 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.943 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.943 11:56:39 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:40.943 11:56:39 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:40.943 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.943 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.943 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.943 11:56:39 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:40.943 11:56:39 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:40.943 11:56:39 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:40.943 11:56:39 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:40.943 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.943 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.943 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.943 11:56:39 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:40.943 11:56:39 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:40.943 11:56:39 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:40.943 00:14:40.943 real 0m2.091s 00:14:40.943 user 0m1.050s 00:14:40.943 sys 0m0.181s 00:14:40.943 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:40.943 11:56:39 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.943 ************************************ 00:14:40.943 END TEST test_create_multi_ublk 00:14:40.943 ************************************ 00:14:40.943 11:56:39 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:40.943 11:56:39 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:40.943 11:56:39 ublk -- ublk/ublk.sh@130 -- # killprocess 87054 00:14:40.943 11:56:39 ublk -- common/autotest_common.sh@946 -- # '[' -z 87054 ']' 00:14:40.943 11:56:39 ublk -- common/autotest_common.sh@950 -- # kill -0 87054 00:14:40.943 11:56:39 ublk -- common/autotest_common.sh@951 -- # uname 00:14:40.943 11:56:39 ublk -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:40.943 11:56:39 ublk -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87054 00:14:41.202 11:56:39 ublk -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:14:41.202 11:56:39 ublk -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 
00:14:41.202 killing process with pid 87054 00:14:41.202 11:56:39 ublk -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87054' 00:14:41.202 11:56:39 ublk -- common/autotest_common.sh@965 -- # kill 87054 00:14:41.202 11:56:39 ublk -- common/autotest_common.sh@970 -- # wait 87054 00:14:41.202 [2024-07-21 11:56:39.946577] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:14:41.202 [2024-07-21 11:56:39.946647] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:14:41.461 00:14:41.461 real 0m17.650s 00:14:41.461 user 0m28.519s 00:14:41.461 sys 0m6.437s 00:14:41.461 11:56:40 ublk -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:41.461 11:56:40 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:41.461 ************************************ 00:14:41.461 END TEST ublk 00:14:41.461 ************************************ 00:14:41.461 11:56:40 -- spdk/autotest.sh@252 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:41.461 11:56:40 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:14:41.461 11:56:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:41.461 11:56:40 -- common/autotest_common.sh@10 -- # set +x 00:14:41.461 ************************************ 00:14:41.461 START TEST ublk_recovery 00:14:41.461 ************************************ 00:14:41.461 11:56:40 ublk_recovery -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:41.719 * Looking for test storage... 00:14:41.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:41.719 11:56:40 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:41.719 11:56:40 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:41.719 11:56:40 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:41.719 11:56:40 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:41.719 11:56:40 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:41.719 11:56:40 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:41.719 11:56:40 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:41.719 11:56:40 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:41.719 11:56:40 ublk_recovery -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:41.719 11:56:40 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:41.719 11:56:40 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=87392 00:14:41.719 11:56:40 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:41.719 11:56:40 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:41.719 11:56:40 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 87392 00:14:41.719 11:56:40 ublk_recovery -- common/autotest_common.sh@827 -- # '[' -z 87392 ']' 00:14:41.719 11:56:40 ublk_recovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.719 11:56:40 ublk_recovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:41.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.719 11:56:40 ublk_recovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
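The recovery test needs the ublk kernel module loaded and a target process it can later kill and restart. A sketch of that setup, assuming the repo layout used in this run:

    modprobe ublk_drv
    build/bin/spdk_tgt -m 0x3 -L ublk &   # -L ublk enables the ublk debug traces seen below
    spdk_pid=$!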
00:14:41.719 11:56:40 ublk_recovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:41.719 11:56:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:41.719 [2024-07-21 11:56:40.449183] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:14:41.719 [2024-07-21 11:56:40.449657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87392 ] 00:14:41.978 [2024-07-21 11:56:40.596405] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:41.978 [2024-07-21 11:56:40.640949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.978 [2024-07-21 11:56:40.641070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.545 11:56:41 ublk_recovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:42.545 11:56:41 ublk_recovery -- common/autotest_common.sh@860 -- # return 0 00:14:42.545 11:56:41 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:42.545 11:56:41 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.545 11:56:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.545 [2024-07-21 11:56:41.231842] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:42.545 [2024-07-21 11:56:41.233187] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:42.545 11:56:41 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.545 11:56:41 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:42.545 11:56:41 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.545 11:56:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.545 malloc0 00:14:42.545 11:56:41 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.545 11:56:41 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:42.545 11:56:41 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.545 11:56:41 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:42.545 [2024-07-21 11:56:41.270979] ublk.c:1908:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:14:42.545 [2024-07-21 11:56:41.271097] ublk.c:1949:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:42.545 [2024-07-21 11:56:41.271112] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:42.545 [2024-07-21 11:56:41.271118] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:42.545 [2024-07-21 11:56:41.279994] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:42.545 [2024-07-21 11:56:41.280014] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:42.545 [2024-07-21 11:56:41.286861] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:42.545 [2024-07-21 11:56:41.286988] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:42.545 [2024-07-21 11:56:41.296853] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:42.545 1 00:14:42.545 11:56:41 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.545 
11:56:41 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:43.480 11:56:42 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=87420 00:14:43.480 11:56:42 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:43.480 11:56:42 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:43.739 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:43.739 fio-3.35 00:14:43.739 Starting 1 process 00:14:49.021 11:56:47 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 87392 00:14:49.021 11:56:47 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:54.304 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 87392 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:54.304 11:56:52 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=87532 00:14:54.304 11:56:52 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:54.304 11:56:52 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:54.304 11:56:52 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 87532 00:14:54.304 11:56:52 ublk_recovery -- common/autotest_common.sh@827 -- # '[' -z 87532 ']' 00:14:54.304 11:56:52 ublk_recovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.304 11:56:52 ublk_recovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:54.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.304 11:56:52 ublk_recovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.304 11:56:52 ublk_recovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:54.304 11:56:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:54.304 [2024-07-21 11:56:52.393167] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
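The crash scenario above runs fio against the exported device, hard-kills the target mid-run, then brings up a fresh target that has to re-adopt the still-live kernel device. A condensed sketch of the sequence, including the recovery RPC issued once the new target is listening:

    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 &
    kill -9 "$spdk_pid"                   # simulate a target crash while I/O is in flight
    build/bin/spdk_tgt -m 0x3 -L ublk &   # restart; /dev/ublkb1 still exists in the kernel
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_recover_disk malloc0 1   # re-bind ublk id 1 to its bdev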
00:14:54.304 [2024-07-21 11:56:52.393272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87532 ] 00:14:54.304 [2024-07-21 11:56:52.555237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:54.304 [2024-07-21 11:56:52.600408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:54.304 [2024-07-21 11:56:52.600519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.566 11:56:53 ublk_recovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:54.566 11:56:53 ublk_recovery -- common/autotest_common.sh@860 -- # return 0 00:14:54.566 11:56:53 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:54.566 11:56:53 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.566 11:56:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:54.566 [2024-07-21 11:56:53.199840] ublk.c: 537:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:54.566 [2024-07-21 11:56:53.201160] ublk.c: 742:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:54.566 11:56:53 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.566 11:56:53 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:54.566 11:56:53 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.566 11:56:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:54.566 malloc0 00:14:54.566 11:56:53 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.566 11:56:53 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:54.566 11:56:53 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.566 11:56:53 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:54.566 [2024-07-21 11:56:53.238971] ublk.c:2095:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:54.566 [2024-07-21 11:56:53.239032] ublk.c: 955:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:54.566 [2024-07-21 11:56:53.239042] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:54.566 1 00:14:54.566 [2024-07-21 11:56:53.246890] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:54.566 [2024-07-21 11:56:53.246917] ublk.c:2024:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:54.566 [2024-07-21 11:56:53.246994] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:54.566 11:56:53 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.566 11:56:53 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 87420 00:14:54.566 [2024-07-21 11:56:53.253855] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:54.566 [2024-07-21 11:56:53.260786] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:54.566 [2024-07-21 11:56:53.267856] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:54.566 [2024-07-21 11:56:53.267878] ublk.c: 378:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:50.799 00:15:50.799 fio_test: (groupid=0, 
jobs=1): err= 0: pid=87427: Sun Jul 21 11:57:42 2024 00:15:50.799 read: IOPS=25.3k, BW=98.9MiB/s (104MB/s)(5937MiB/60002msec) 00:15:50.799 slat (nsec): min=1140, max=313810, avg=6436.51, stdev=2176.60 00:15:50.799 clat (usec): min=983, max=5963.5k, avg=2474.74, stdev=38062.60 00:15:50.799 lat (usec): min=990, max=5963.5k, avg=2481.18, stdev=38062.59 00:15:50.799 clat percentiles (usec): 00:15:50.799 | 1.00th=[ 1795], 5.00th=[ 1958], 10.00th=[ 1991], 20.00th=[ 2024], 00:15:50.799 | 30.00th=[ 2057], 40.00th=[ 2073], 50.00th=[ 2114], 60.00th=[ 2114], 00:15:50.799 | 70.00th=[ 2147], 80.00th=[ 2180], 90.00th=[ 2343], 95.00th=[ 3359], 00:15:50.799 | 99.00th=[ 4686], 99.50th=[ 5276], 99.90th=[ 6521], 99.95th=[ 7177], 00:15:50.799 | 99.99th=[12518] 00:15:50.799 bw ( KiB/s): min=28264, max=116952, per=100.00%, avg=111639.26, stdev=11010.70, samples=108 00:15:50.799 iops : min= 7066, max=29238, avg=27909.81, stdev=2752.71, samples=108 00:15:50.799 write: IOPS=25.3k, BW=98.8MiB/s (104MB/s)(5929MiB/60002msec); 0 zone resets 00:15:50.799 slat (nsec): min=1139, max=461200, avg=6528.72, stdev=2233.17 00:15:50.799 clat (usec): min=1002, max=5964.0k, avg=2567.34, stdev=39301.03 00:15:50.799 lat (usec): min=1010, max=5964.1k, avg=2573.87, stdev=39301.03 00:15:50.799 clat percentiles (usec): 00:15:50.799 | 1.00th=[ 1795], 5.00th=[ 1942], 10.00th=[ 2073], 20.00th=[ 2114], 00:15:50.799 | 30.00th=[ 2147], 40.00th=[ 2180], 50.00th=[ 2212], 60.00th=[ 2212], 00:15:50.799 | 70.00th=[ 2245], 80.00th=[ 2278], 90.00th=[ 2409], 95.00th=[ 3359], 00:15:50.799 | 99.00th=[ 4686], 99.50th=[ 5276], 99.90th=[ 6652], 99.95th=[ 7308], 00:15:50.799 | 99.99th=[12649] 00:15:50.799 bw ( KiB/s): min=28888, max=115712, per=100.00%, avg=111485.52, stdev=10906.96, samples=108 00:15:50.799 iops : min= 7222, max=28928, avg=27871.36, stdev=2726.77, samples=108 00:15:50.799 lat (usec) : 1000=0.01% 00:15:50.799 lat (msec) : 2=9.07%, 4=88.23%, 10=2.69%, 20=0.01%, >=2000=0.01% 00:15:50.799 cpu : usr=9.07%, sys=32.62%, ctx=126574, majf=0, minf=13 00:15:50.799 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:50.799 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:50.799 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:50.799 issued rwts: total=1519863,1517899,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:50.799 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:50.799 00:15:50.799 Run status group 0 (all jobs): 00:15:50.799 READ: bw=98.9MiB/s (104MB/s), 98.9MiB/s-98.9MiB/s (104MB/s-104MB/s), io=5937MiB (6225MB), run=60002-60002msec 00:15:50.799 WRITE: bw=98.8MiB/s (104MB/s), 98.8MiB/s-98.8MiB/s (104MB/s-104MB/s), io=5929MiB (6217MB), run=60002-60002msec 00:15:50.799 00:15:50.799 Disk stats (read/write): 00:15:50.799 ublkb1: ios=1516704/1514784, merge=0/0, ticks=3659118/3659992, in_queue=7319111, util=99.92% 00:15:50.799 11:57:42 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:50.799 11:57:42 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.799 11:57:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:50.799 [2024-07-21 11:57:42.573837] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:50.799 [2024-07-21 11:57:42.614857] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:50.799 [2024-07-21 11:57:42.615107] ublk.c: 434:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:50.799 
[2024-07-21 11:57:42.628872] ublk.c: 328:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:50.799 [2024-07-21 11:57:42.629019] ublk.c: 969:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:50.799 [2024-07-21 11:57:42.629039] ublk.c:1803:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:50.799 11:57:42 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.799 11:57:42 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:50.799 11:57:42 ublk_recovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.799 11:57:42 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:50.799 [2024-07-21 11:57:42.636946] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:50.799 [2024-07-21 11:57:42.638570] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:50.799 [2024-07-21 11:57:42.638617] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:50.799 11:57:42 ublk_recovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.799 11:57:42 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:50.799 11:57:42 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:50.799 11:57:42 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 87532 00:15:50.799 11:57:42 ublk_recovery -- common/autotest_common.sh@946 -- # '[' -z 87532 ']' 00:15:50.799 11:57:42 ublk_recovery -- common/autotest_common.sh@950 -- # kill -0 87532 00:15:50.799 11:57:42 ublk_recovery -- common/autotest_common.sh@951 -- # uname 00:15:50.799 11:57:42 ublk_recovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:50.799 11:57:42 ublk_recovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 87532 00:15:50.799 11:57:42 ublk_recovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:50.799 11:57:42 ublk_recovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:50.799 killing process with pid 87532 00:15:50.799 11:57:42 ublk_recovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 87532' 00:15:50.799 11:57:42 ublk_recovery -- common/autotest_common.sh@965 -- # kill 87532 00:15:50.799 11:57:42 ublk_recovery -- common/autotest_common.sh@970 -- # wait 87532 00:15:50.799 [2024-07-21 11:57:42.813753] ublk.c: 819:_ublk_fini: *DEBUG*: finish shutdown 00:15:50.799 [2024-07-21 11:57:42.813828] ublk.c: 750:_ublk_fini_done: *DEBUG*: 00:15:50.799 00:15:50.799 real 1m2.839s 00:15:50.799 user 1m44.106s 00:15:50.799 sys 0m36.043s 00:15:50.799 11:57:43 ublk_recovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:50.799 11:57:43 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:50.799 ************************************ 00:15:50.799 END TEST ublk_recovery 00:15:50.799 ************************************ 00:15:50.799 11:57:43 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:15:50.799 11:57:43 -- spdk/autotest.sh@260 -- # timing_exit lib 00:15:50.799 11:57:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:50.799 11:57:43 -- common/autotest_common.sh@10 -- # set +x 00:15:50.799 11:57:43 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:15:50.799 11:57:43 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:15:50.799 11:57:43 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:15:50.799 11:57:43 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:15:50.799 11:57:43 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:15:50.799 11:57:43 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 
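Teardown after the recovered run uses the same two RPCs as the earlier tests; the equivalent direct calls, assuming device 1 is the only one left:

    scripts/rpc.py ublk_stop_disk 1
    scripts/rpc.py -t 120 ublk_destroy_target   # long timeout, as the multi-ublk test used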
00:15:50.799 11:57:43 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:15:50.799 11:57:43 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:15:50.799 11:57:43 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:15:50.799 11:57:43 -- spdk/autotest.sh@339 -- # '[' 1 -eq 1 ']' 00:15:50.799 11:57:43 -- spdk/autotest.sh@340 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:50.799 11:57:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:15:50.799 11:57:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:50.799 11:57:43 -- common/autotest_common.sh@10 -- # set +x 00:15:50.799 ************************************ 00:15:50.799 START TEST ftl 00:15:50.799 ************************************ 00:15:50.799 11:57:43 ftl -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:50.799 * Looking for test storage... 00:15:50.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:50.799 11:57:43 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:50.799 11:57:43 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:50.799 11:57:43 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:50.799 11:57:43 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:50.799 11:57:43 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:50.799 11:57:43 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:50.799 11:57:43 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:50.799 11:57:43 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:50.799 11:57:43 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:50.799 11:57:43 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:50.799 11:57:43 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:50.799 11:57:43 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:50.799 11:57:43 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:50.799 11:57:43 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:50.799 11:57:43 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:50.799 11:57:43 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:50.799 11:57:43 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:50.799 11:57:43 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:50.799 11:57:43 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:50.799 11:57:43 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:50.799 11:57:43 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:50.799 11:57:43 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:50.799 11:57:43 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:50.799 11:57:43 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:50.799 11:57:43 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:50.799 11:57:43 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:50.799 11:57:43 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:50.799 11:57:43 ftl -- 
ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:50.799 11:57:43 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:50.799 11:57:43 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:50.799 11:57:43 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:50.799 11:57:43 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:50.799 11:57:43 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:50.799 11:57:43 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:50.799 11:57:43 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:50.799 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:50.799 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:50.799 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:50.799 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:50.799 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:50.799 11:57:44 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=88314 00:15:50.799 11:57:44 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:50.799 11:57:44 ftl -- ftl/ftl.sh@38 -- # waitforlisten 88314 00:15:50.799 11:57:44 ftl -- common/autotest_common.sh@827 -- # '[' -z 88314 ']' 00:15:50.799 11:57:44 ftl -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.799 11:57:44 ftl -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:50.799 11:57:44 ftl -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.799 11:57:44 ftl -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:50.799 11:57:44 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:50.799 [2024-07-21 11:57:44.167655] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
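ftl.sh starts the target with --wait-for-rpc so that bdev options can be set before the framework initializes; the NVMe subsystem config is then generated from the attached controllers and loaded over RPC, as the following lines show. A condensed sketch of that startup pattern:

    build/bin/spdk_tgt --wait-for-rpc &
    scripts/rpc.py bdev_set_options -d    # options must be set before framework init
    scripts/rpc.py framework_start_init
    scripts/rpc.py load_subsystem_config -j <(scripts/gen_nvme.sh)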
00:15:50.799 [2024-07-21 11:57:44.167780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88314 ] 00:15:50.799 [2024-07-21 11:57:44.326649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.799 [2024-07-21 11:57:44.371703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.799 11:57:44 ftl -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:50.799 11:57:44 ftl -- common/autotest_common.sh@860 -- # return 0 00:15:50.799 11:57:44 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:50.799 11:57:45 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:50.799 11:57:45 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:50.799 11:57:45 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:50.799 11:57:45 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:50.799 11:57:45 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:50.799 11:57:45 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:50.799 11:57:46 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:50.799 11:57:46 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:50.799 11:57:46 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:50.799 11:57:46 ftl -- ftl/ftl.sh@50 -- # break 00:15:50.799 11:57:46 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:50.799 11:57:46 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:50.799 11:57:46 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:50.799 11:57:46 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:50.799 11:57:46 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:50.799 11:57:46 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:50.799 11:57:46 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:50.799 11:57:46 ftl -- ftl/ftl.sh@63 -- # break 00:15:50.799 11:57:46 ftl -- ftl/ftl.sh@66 -- # killprocess 88314 00:15:50.799 11:57:46 ftl -- common/autotest_common.sh@946 -- # '[' -z 88314 ']' 00:15:50.799 11:57:46 ftl -- common/autotest_common.sh@950 -- # kill -0 88314 00:15:50.799 11:57:46 ftl -- common/autotest_common.sh@951 -- # uname 00:15:50.799 11:57:46 ftl -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:50.799 11:57:46 ftl -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88314 00:15:50.799 11:57:46 ftl -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:50.799 11:57:46 ftl -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:50.799 killing process with pid 88314 00:15:50.799 11:57:46 ftl -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88314' 00:15:50.799 11:57:46 ftl -- common/autotest_common.sh@965 -- # kill 88314 00:15:50.799 11:57:46 ftl -- common/autotest_common.sh@970 -- # wait 88314 00:15:50.799 11:57:46 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:50.799 11:57:46 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:50.799 11:57:46 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:15:50.799 11:57:46 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:50.799 11:57:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:50.799 ************************************ 00:15:50.799 START TEST ftl_fio_basic 00:15:50.799 ************************************ 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:50.799 * Looking for test storage... 00:15:50.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- 
ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:50.799 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=88423 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 88423 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@827 -- # '[' -z 88423 ']' 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:50.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:50.800 11:57:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:50.800 [2024-07-21 11:57:46.999683] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
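The trace above shows fio.sh selecting the 'basic' suite (randw-verify randw-verify-j2 randw-verify-depth128), exporting FTL_BDEV_NAME/FTL_JSON_CONF, and launching a dedicated spdk_tgt with core mask 7 before blocking in waitforlisten until the RPC socket answers. A minimal sketch of that launch-and-wait pattern, using the paths from this run (the polling loop is an illustrative stand-in for the harness's waitforlisten, not a copy of it):

    # Launch the target on cores 0-2 (-m 7 = reactor mask 0x7) and record its PID.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 &
    svcpid=$!
    # waitforlisten equivalent: poll the default RPC socket (/var/tmp/spdk.sock)
    # until the target responds, bailing out if the process died in the meantime.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$svcpid" || exit 1
        sleep 0.2
    done

Once the socket answers, every subsequent rpc.py call in the trace below is issued against this target process (pid 88423 in this run).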
00:15:50.800 [2024-07-21 11:57:46.999878] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88423 ] 00:15:50.800 [2024-07-21 11:57:47.162719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:50.800 [2024-07-21 11:57:47.208677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.800 [2024-07-21 11:57:47.208765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.800 [2024-07-21 11:57:47.208908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:50.800 11:57:47 ftl.ftl_fio_basic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:50.800 11:57:47 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # return 0 00:15:50.800 11:57:47 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:50.800 11:57:47 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:50.800 11:57:47 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:50.800 11:57:47 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:50.800 11:57:47 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:50.800 11:57:47 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:15:50.800 { 00:15:50.800 "name": "nvme0n1", 00:15:50.800 "aliases": [ 00:15:50.800 "09580d46-7f1e-4203-9a20-255226ec1c18" 00:15:50.800 ], 00:15:50.800 "product_name": "NVMe disk", 00:15:50.800 "block_size": 4096, 00:15:50.800 "num_blocks": 1310720, 00:15:50.800 "uuid": "09580d46-7f1e-4203-9a20-255226ec1c18", 00:15:50.800 "assigned_rate_limits": { 00:15:50.800 "rw_ios_per_sec": 0, 00:15:50.800 "rw_mbytes_per_sec": 0, 00:15:50.800 "r_mbytes_per_sec": 0, 00:15:50.800 "w_mbytes_per_sec": 0 00:15:50.800 }, 00:15:50.800 "claimed": false, 00:15:50.800 "zoned": false, 00:15:50.800 "supported_io_types": { 00:15:50.800 "read": true, 00:15:50.800 "write": true, 00:15:50.800 "unmap": true, 00:15:50.800 "write_zeroes": true, 00:15:50.800 "flush": true, 00:15:50.800 "reset": true, 00:15:50.800 "compare": true, 00:15:50.800 "compare_and_write": false, 00:15:50.800 "abort": true, 00:15:50.800 "nvme_admin": true, 00:15:50.800 "nvme_io": true 00:15:50.800 }, 00:15:50.800 "driver_specific": { 00:15:50.800 "nvme": [ 00:15:50.800 { 00:15:50.800 "pci_address": "0000:00:11.0", 00:15:50.800 "trid": { 00:15:50.800 "trtype": "PCIe", 00:15:50.800 "traddr": "0000:00:11.0" 00:15:50.800 }, 
00:15:50.800 "ctrlr_data": { 00:15:50.800 "cntlid": 0, 00:15:50.800 "vendor_id": "0x1b36", 00:15:50.800 "model_number": "QEMU NVMe Ctrl", 00:15:50.800 "serial_number": "12341", 00:15:50.800 "firmware_revision": "8.0.0", 00:15:50.800 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:50.800 "oacs": { 00:15:50.800 "security": 0, 00:15:50.800 "format": 1, 00:15:50.800 "firmware": 0, 00:15:50.800 "ns_manage": 1 00:15:50.800 }, 00:15:50.800 "multi_ctrlr": false, 00:15:50.800 "ana_reporting": false 00:15:50.800 }, 00:15:50.800 "vs": { 00:15:50.800 "nvme_version": "1.4" 00:15:50.800 }, 00:15:50.800 "ns_data": { 00:15:50.800 "id": 1, 00:15:50.800 "can_share": false 00:15:50.800 } 00:15:50.800 } 00:15:50.800 ], 00:15:50.800 "mp_policy": "active_passive" 00:15:50.800 } 00:15:50.800 } 00:15:50.800 ]' 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=1310720 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 5120 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=29c949f6-0f88-40d7-b3ef-fd3508f2f796 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 29c949f6-0f88-40d7-b3ef-fd3508f2f796 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=7401faa4-5a34-4a9a-be1d-003b79004e06 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7401faa4-5a34-4a9a-be1d-003b79004e06 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=7401faa4-5a34-4a9a-be1d-003b79004e06 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 7401faa4-5a34-4a9a-be1d-003b79004e06 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=7401faa4-5a34-4a9a-be1d-003b79004e06 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:15:50.800 11:57:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7401faa4-5a34-4a9a-be1d-003b79004e06 00:15:50.800 11:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:15:50.800 { 00:15:50.800 "name": "7401faa4-5a34-4a9a-be1d-003b79004e06", 00:15:50.800 "aliases": [ 00:15:50.800 "lvs/nvme0n1p0" 00:15:50.800 ], 00:15:50.800 "product_name": "Logical Volume", 00:15:50.800 "block_size": 4096, 00:15:50.800 "num_blocks": 26476544, 00:15:50.800 "uuid": "7401faa4-5a34-4a9a-be1d-003b79004e06", 00:15:50.800 "assigned_rate_limits": { 00:15:50.800 "rw_ios_per_sec": 0, 00:15:50.800 "rw_mbytes_per_sec": 0, 00:15:50.800 "r_mbytes_per_sec": 0, 00:15:50.800 "w_mbytes_per_sec": 0 00:15:50.800 }, 00:15:50.800 "claimed": false, 00:15:50.800 "zoned": false, 00:15:50.800 "supported_io_types": { 00:15:50.800 "read": true, 00:15:50.800 "write": true, 00:15:50.800 "unmap": true, 00:15:50.800 "write_zeroes": true, 00:15:50.800 "flush": false, 00:15:50.800 "reset": true, 00:15:50.800 "compare": false, 00:15:50.800 "compare_and_write": false, 00:15:50.800 "abort": false, 00:15:50.800 "nvme_admin": false, 00:15:50.800 "nvme_io": false 00:15:50.800 }, 00:15:50.800 "driver_specific": { 00:15:50.800 "lvol": { 00:15:50.800 "lvol_store_uuid": "29c949f6-0f88-40d7-b3ef-fd3508f2f796", 00:15:50.800 "base_bdev": "nvme0n1", 00:15:50.800 "thin_provision": true, 00:15:50.800 "num_allocated_clusters": 0, 00:15:50.800 "snapshot": false, 00:15:50.800 "clone": false, 00:15:50.800 "esnap_clone": false 00:15:50.800 } 00:15:50.800 } 00:15:50.800 } 00:15:50.800 ]' 00:15:50.800 11:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:15:50.800 11:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:15:50.800 11:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:15:50.800 11:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:15:50.800 11:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:15:50.800 11:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:15:50.800 11:57:49 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:50.800 11:57:49 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:50.800 11:57:49 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:50.800 11:57:49 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:50.800 11:57:49 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:50.800 11:57:49 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 7401faa4-5a34-4a9a-be1d-003b79004e06 00:15:50.800 11:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=7401faa4-5a34-4a9a-be1d-003b79004e06 00:15:50.800 11:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:15:50.800 11:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:15:50.800 11:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:15:50.800 11:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7401faa4-5a34-4a9a-be1d-003b79004e06 00:15:50.800 11:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:15:50.800 { 00:15:50.800 "name": "7401faa4-5a34-4a9a-be1d-003b79004e06", 00:15:50.800 "aliases": [ 00:15:50.800 "lvs/nvme0n1p0" 
00:15:50.800 ], 00:15:50.800 "product_name": "Logical Volume", 00:15:50.800 "block_size": 4096, 00:15:50.800 "num_blocks": 26476544, 00:15:50.800 "uuid": "7401faa4-5a34-4a9a-be1d-003b79004e06", 00:15:50.800 "assigned_rate_limits": { 00:15:50.800 "rw_ios_per_sec": 0, 00:15:50.800 "rw_mbytes_per_sec": 0, 00:15:50.800 "r_mbytes_per_sec": 0, 00:15:50.800 "w_mbytes_per_sec": 0 00:15:50.800 }, 00:15:50.800 "claimed": false, 00:15:50.800 "zoned": false, 00:15:50.800 "supported_io_types": { 00:15:50.800 "read": true, 00:15:50.800 "write": true, 00:15:50.800 "unmap": true, 00:15:50.800 "write_zeroes": true, 00:15:50.800 "flush": false, 00:15:50.800 "reset": true, 00:15:50.800 "compare": false, 00:15:50.800 "compare_and_write": false, 00:15:50.800 "abort": false, 00:15:50.800 "nvme_admin": false, 00:15:50.800 "nvme_io": false 00:15:50.800 }, 00:15:50.800 "driver_specific": { 00:15:50.800 "lvol": { 00:15:50.800 "lvol_store_uuid": "29c949f6-0f88-40d7-b3ef-fd3508f2f796", 00:15:50.800 "base_bdev": "nvme0n1", 00:15:50.800 "thin_provision": true, 00:15:50.800 "num_allocated_clusters": 0, 00:15:50.800 "snapshot": false, 00:15:50.800 "clone": false, 00:15:50.800 "esnap_clone": false 00:15:50.800 } 00:15:50.800 } 00:15:50.800 } 00:15:50.800 ]' 00:15:50.800 11:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:15:50.800 11:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:15:50.800 11:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:15:51.059 11:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:15:51.059 11:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:15:51.059 11:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:15:51.059 11:57:49 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:51.059 11:57:49 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:51.059 11:57:49 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:51.059 11:57:49 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:51.059 11:57:49 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:51.059 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:51.059 11:57:49 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 7401faa4-5a34-4a9a-be1d-003b79004e06 00:15:51.059 11:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1374 -- # local bdev_name=7401faa4-5a34-4a9a-be1d-003b79004e06 00:15:51.059 11:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1375 -- # local bdev_info 00:15:51.059 11:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1376 -- # local bs 00:15:51.059 11:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1377 -- # local nb 00:15:51.059 11:57:49 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7401faa4-5a34-4a9a-be1d-003b79004e06 00:15:51.317 11:57:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:15:51.317 { 00:15:51.317 "name": "7401faa4-5a34-4a9a-be1d-003b79004e06", 00:15:51.317 "aliases": [ 00:15:51.317 "lvs/nvme0n1p0" 00:15:51.317 ], 00:15:51.317 "product_name": "Logical Volume", 00:15:51.317 "block_size": 4096, 00:15:51.317 "num_blocks": 26476544, 00:15:51.317 "uuid": "7401faa4-5a34-4a9a-be1d-003b79004e06", 00:15:51.317 "assigned_rate_limits": { 00:15:51.317 "rw_ios_per_sec": 0, 
00:15:51.317 "rw_mbytes_per_sec": 0, 00:15:51.317 "r_mbytes_per_sec": 0, 00:15:51.317 "w_mbytes_per_sec": 0 00:15:51.317 }, 00:15:51.317 "claimed": false, 00:15:51.317 "zoned": false, 00:15:51.317 "supported_io_types": { 00:15:51.317 "read": true, 00:15:51.317 "write": true, 00:15:51.317 "unmap": true, 00:15:51.317 "write_zeroes": true, 00:15:51.317 "flush": false, 00:15:51.317 "reset": true, 00:15:51.317 "compare": false, 00:15:51.317 "compare_and_write": false, 00:15:51.317 "abort": false, 00:15:51.317 "nvme_admin": false, 00:15:51.317 "nvme_io": false 00:15:51.317 }, 00:15:51.317 "driver_specific": { 00:15:51.317 "lvol": { 00:15:51.317 "lvol_store_uuid": "29c949f6-0f88-40d7-b3ef-fd3508f2f796", 00:15:51.317 "base_bdev": "nvme0n1", 00:15:51.317 "thin_provision": true, 00:15:51.317 "num_allocated_clusters": 0, 00:15:51.317 "snapshot": false, 00:15:51.317 "clone": false, 00:15:51.317 "esnap_clone": false 00:15:51.317 } 00:15:51.317 } 00:15:51.317 } 00:15:51.317 ]' 00:15:51.317 11:57:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:15:51.317 11:57:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # bs=4096 00:15:51.317 11:57:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:15:51.317 11:57:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # nb=26476544 00:15:51.317 11:57:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:15:51.317 11:57:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # echo 103424 00:15:51.317 11:57:50 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:51.317 11:57:50 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:51.317 11:57:50 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7401faa4-5a34-4a9a-be1d-003b79004e06 -c nvc0n1p0 --l2p_dram_limit 60 00:15:51.577 [2024-07-21 11:57:50.299245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.577 [2024-07-21 11:57:50.299297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:51.577 [2024-07-21 11:57:50.299329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:15:51.577 [2024-07-21 11:57:50.299337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.577 [2024-07-21 11:57:50.299428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.577 [2024-07-21 11:57:50.299440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:51.577 [2024-07-21 11:57:50.299453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:15:51.577 [2024-07-21 11:57:50.299461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.577 [2024-07-21 11:57:50.299501] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:51.577 [2024-07-21 11:57:50.299860] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:51.577 [2024-07-21 11:57:50.299885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.577 [2024-07-21 11:57:50.299894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:51.577 [2024-07-21 11:57:50.299905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:15:51.577 [2024-07-21 11:57:50.299912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:15:51.577 [2024-07-21 11:57:50.300018] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID afb43c1b-0bd1-4f0a-97ca-9ed1ff88eaee 00:15:51.577 [2024-07-21 11:57:50.302528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.577 [2024-07-21 11:57:50.302562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:51.577 [2024-07-21 11:57:50.302588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:15:51.577 [2024-07-21 11:57:50.302600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.577 [2024-07-21 11:57:50.316644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.577 [2024-07-21 11:57:50.316677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:51.577 [2024-07-21 11:57:50.316689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.979 ms 00:15:51.577 [2024-07-21 11:57:50.316728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.577 [2024-07-21 11:57:50.316869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.577 [2024-07-21 11:57:50.316888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:51.577 [2024-07-21 11:57:50.316897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:15:51.577 [2024-07-21 11:57:50.316908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.577 [2024-07-21 11:57:50.317028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.577 [2024-07-21 11:57:50.317046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:51.577 [2024-07-21 11:57:50.317056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:15:51.577 [2024-07-21 11:57:50.317068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.577 [2024-07-21 11:57:50.317115] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:51.577 [2024-07-21 11:57:50.319868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.577 [2024-07-21 11:57:50.319893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:51.577 [2024-07-21 11:57:50.319905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.768 ms 00:15:51.577 [2024-07-21 11:57:50.319913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.577 [2024-07-21 11:57:50.319969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.577 [2024-07-21 11:57:50.319982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:51.577 [2024-07-21 11:57:50.319993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:51.577 [2024-07-21 11:57:50.320000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.577 [2024-07-21 11:57:50.320041] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:51.577 [2024-07-21 11:57:50.320187] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:51.577 [2024-07-21 11:57:50.320219] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:51.577 [2024-07-21 11:57:50.320229] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:15:51.577 [2024-07-21 11:57:50.320242] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:51.577 [2024-07-21 11:57:50.320252] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:51.577 [2024-07-21 11:57:50.320266] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:51.577 [2024-07-21 11:57:50.320274] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:51.577 [2024-07-21 11:57:50.320284] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:51.577 [2024-07-21 11:57:50.320291] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:51.577 [2024-07-21 11:57:50.320301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.577 [2024-07-21 11:57:50.320309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:51.577 [2024-07-21 11:57:50.320319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:15:51.577 [2024-07-21 11:57:50.320326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.577 [2024-07-21 11:57:50.320419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.577 [2024-07-21 11:57:50.320428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:51.577 [2024-07-21 11:57:50.320444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:15:51.577 [2024-07-21 11:57:50.320464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.577 [2024-07-21 11:57:50.320587] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:51.577 [2024-07-21 11:57:50.320610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:51.577 [2024-07-21 11:57:50.320621] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:51.577 [2024-07-21 11:57:50.320629] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:51.577 [2024-07-21 11:57:50.320641] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:51.577 [2024-07-21 11:57:50.320648] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:51.577 [2024-07-21 11:57:50.320657] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:51.577 [2024-07-21 11:57:50.320664] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:51.577 [2024-07-21 11:57:50.320673] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:51.577 [2024-07-21 11:57:50.320680] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:51.577 [2024-07-21 11:57:50.320689] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:51.577 [2024-07-21 11:57:50.320697] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:51.577 [2024-07-21 11:57:50.320706] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:51.577 [2024-07-21 11:57:50.320716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:51.577 [2024-07-21 11:57:50.320728] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:51.577 [2024-07-21 11:57:50.320734] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:51.577 
[2024-07-21 11:57:50.320743] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:51.577 [2024-07-21 11:57:50.320749] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:51.577 [2024-07-21 11:57:50.320758] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:51.577 [2024-07-21 11:57:50.320765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:51.577 [2024-07-21 11:57:50.320774] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:51.577 [2024-07-21 11:57:50.320781] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:51.577 [2024-07-21 11:57:50.320789] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:51.577 [2024-07-21 11:57:50.320796] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:51.577 [2024-07-21 11:57:50.320805] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:51.577 [2024-07-21 11:57:50.320811] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:51.577 [2024-07-21 11:57:50.320936] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:51.577 [2024-07-21 11:57:50.320964] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:51.577 [2024-07-21 11:57:50.321012] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:51.577 [2024-07-21 11:57:50.321030] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:51.577 [2024-07-21 11:57:50.321055] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:51.577 [2024-07-21 11:57:50.321073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:51.577 [2024-07-21 11:57:50.321118] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:51.577 [2024-07-21 11:57:50.321137] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:51.577 [2024-07-21 11:57:50.321159] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:51.577 [2024-07-21 11:57:50.321182] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:51.577 [2024-07-21 11:57:50.321203] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:51.577 [2024-07-21 11:57:50.321221] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:51.577 [2024-07-21 11:57:50.321254] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:51.577 [2024-07-21 11:57:50.321279] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:51.577 [2024-07-21 11:57:50.321300] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:51.577 [2024-07-21 11:57:50.321318] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:51.577 [2024-07-21 11:57:50.321353] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:51.577 [2024-07-21 11:57:50.321383] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:51.577 [2024-07-21 11:57:50.321413] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:51.577 [2024-07-21 11:57:50.321436] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:51.577 [2024-07-21 11:57:50.321467] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:51.577 [2024-07-21 11:57:50.321497] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:15:51.577 [2024-07-21 11:57:50.321523] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:51.577 [2024-07-21 11:57:50.321549] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:51.577 [2024-07-21 11:57:50.321572] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:51.577 [2024-07-21 11:57:50.321591] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:51.577 [2024-07-21 11:57:50.321615] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:51.577 [2024-07-21 11:57:50.321647] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:51.577 [2024-07-21 11:57:50.321706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:51.577 [2024-07-21 11:57:50.321766] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:51.577 [2024-07-21 11:57:50.321808] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:51.577 [2024-07-21 11:57:50.321859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:51.577 [2024-07-21 11:57:50.321893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:51.577 [2024-07-21 11:57:50.321931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:51.577 [2024-07-21 11:57:50.321968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:51.577 [2024-07-21 11:57:50.322006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:51.577 [2024-07-21 11:57:50.322045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:15:51.577 [2024-07-21 11:57:50.322081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:51.577 [2024-07-21 11:57:50.322115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:51.577 [2024-07-21 11:57:50.322152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:51.577 [2024-07-21 11:57:50.322193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:51.577 [2024-07-21 11:57:50.322225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:51.577 [2024-07-21 11:57:50.322264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:51.577 [2024-07-21 11:57:50.322307] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:51.577 [2024-07-21 
11:57:50.322365] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:51.577 [2024-07-21 11:57:50.322401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:51.577 [2024-07-21 11:57:50.322441] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:51.577 [2024-07-21 11:57:50.322478] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:51.577 [2024-07-21 11:57:50.322490] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:51.577 [2024-07-21 11:57:50.322500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:51.577 [2024-07-21 11:57:50.322510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:51.577 [2024-07-21 11:57:50.322521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.969 ms 00:15:51.577 [2024-07-21 11:57:50.322554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:51.577 [2024-07-21 11:57:50.322664] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:15:51.577 [2024-07-21 11:57:50.322678] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:54.143 [2024-07-21 11:57:52.619858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.143 [2024-07-21 11:57:52.620032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:54.143 [2024-07-21 11:57:52.620071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2301.619 ms 00:15:54.143 [2024-07-21 11:57:52.620108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.143 [2024-07-21 11:57:52.640236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.143 [2024-07-21 11:57:52.640382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:54.143 [2024-07-21 11:57:52.640417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.009 ms 00:15:54.143 [2024-07-21 11:57:52.640445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.143 [2024-07-21 11:57:52.640611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.143 [2024-07-21 11:57:52.640654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:54.143 [2024-07-21 11:57:52.640709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:15:54.143 [2024-07-21 11:57:52.640757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.143 [2024-07-21 11:57:52.667600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.143 [2024-07-21 11:57:52.667719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:54.143 [2024-07-21 11:57:52.667768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.795 ms 00:15:54.143 [2024-07-21 11:57:52.667792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.143 [2024-07-21 11:57:52.667869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.143 [2024-07-21 
11:57:52.667912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:54.144 [2024-07-21 11:57:52.667943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:54.144 [2024-07-21 11:57:52.667966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.144 [2024-07-21 11:57:52.668805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.144 [2024-07-21 11:57:52.668891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:54.144 [2024-07-21 11:57:52.668935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:15:54.144 [2024-07-21 11:57:52.668983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.144 [2024-07-21 11:57:52.669168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.144 [2024-07-21 11:57:52.669220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:54.144 [2024-07-21 11:57:52.669258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:15:54.144 [2024-07-21 11:57:52.669294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.144 [2024-07-21 11:57:52.681867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.144 [2024-07-21 11:57:52.681959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:54.144 [2024-07-21 11:57:52.681998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.512 ms 00:15:54.144 [2024-07-21 11:57:52.682021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.144 [2024-07-21 11:57:52.691126] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:54.144 [2024-07-21 11:57:52.717764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.144 [2024-07-21 11:57:52.717902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:54.144 [2024-07-21 11:57:52.717939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.651 ms 00:15:54.144 [2024-07-21 11:57:52.717965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.144 [2024-07-21 11:57:52.766258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.144 [2024-07-21 11:57:52.766355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:54.144 [2024-07-21 11:57:52.766399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.311 ms 00:15:54.144 [2024-07-21 11:57:52.766419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.144 [2024-07-21 11:57:52.766668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.144 [2024-07-21 11:57:52.766726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:54.144 [2024-07-21 11:57:52.766774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:15:54.144 [2024-07-21 11:57:52.766800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.144 [2024-07-21 11:57:52.770311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.144 [2024-07-21 11:57:52.770383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:54.144 [2024-07-21 11:57:52.770413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.431 ms 00:15:54.144 [2024-07-21 
11:57:52.770433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.144 [2024-07-21 11:57:52.773174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.144 [2024-07-21 11:57:52.773237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:54.144 [2024-07-21 11:57:52.773276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.683 ms 00:15:54.144 [2024-07-21 11:57:52.773295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.144 [2024-07-21 11:57:52.773616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.144 [2024-07-21 11:57:52.773659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:54.144 [2024-07-21 11:57:52.773692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:15:54.144 [2024-07-21 11:57:52.773725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.144 [2024-07-21 11:57:52.815271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.144 [2024-07-21 11:57:52.815354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:54.144 [2024-07-21 11:57:52.815390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.554 ms 00:15:54.144 [2024-07-21 11:57:52.815411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.144 [2024-07-21 11:57:52.821441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.144 [2024-07-21 11:57:52.821512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:54.144 [2024-07-21 11:57:52.821560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.975 ms 00:15:54.144 [2024-07-21 11:57:52.821580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.144 [2024-07-21 11:57:52.825254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.144 [2024-07-21 11:57:52.825319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:54.144 [2024-07-21 11:57:52.825349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.621 ms 00:15:54.144 [2024-07-21 11:57:52.825368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.144 [2024-07-21 11:57:52.829188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.144 [2024-07-21 11:57:52.829252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:54.144 [2024-07-21 11:57:52.829283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.763 ms 00:15:54.144 [2024-07-21 11:57:52.829304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.144 [2024-07-21 11:57:52.829375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.144 [2024-07-21 11:57:52.829400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:54.144 [2024-07-21 11:57:52.829443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:54.144 [2024-07-21 11:57:52.829490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.144 [2024-07-21 11:57:52.829615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.144 [2024-07-21 11:57:52.829648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:54.144 [2024-07-21 11:57:52.829688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.040 ms 00:15:54.144 [2024-07-21 11:57:52.829710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.144 [2024-07-21 11:57:52.831232] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2536.358 ms, result 0 00:15:54.144 { 00:15:54.144 "name": "ftl0", 00:15:54.144 "uuid": "afb43c1b-0bd1-4f0a-97ca-9ed1ff88eaee" 00:15:54.144 } 00:15:54.144 11:57:52 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:54.144 11:57:52 ftl.ftl_fio_basic -- common/autotest_common.sh@895 -- # local bdev_name=ftl0 00:15:54.144 11:57:52 ftl.ftl_fio_basic -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:15:54.144 11:57:52 ftl.ftl_fio_basic -- common/autotest_common.sh@897 -- # local i 00:15:54.144 11:57:52 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:15:54.144 11:57:52 ftl.ftl_fio_basic -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:15:54.144 11:57:52 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:54.402 11:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:54.402 [ 00:15:54.402 { 00:15:54.402 "name": "ftl0", 00:15:54.402 "aliases": [ 00:15:54.402 "afb43c1b-0bd1-4f0a-97ca-9ed1ff88eaee" 00:15:54.402 ], 00:15:54.402 "product_name": "FTL disk", 00:15:54.402 "block_size": 4096, 00:15:54.402 "num_blocks": 20971520, 00:15:54.402 "uuid": "afb43c1b-0bd1-4f0a-97ca-9ed1ff88eaee", 00:15:54.402 "assigned_rate_limits": { 00:15:54.402 "rw_ios_per_sec": 0, 00:15:54.402 "rw_mbytes_per_sec": 0, 00:15:54.402 "r_mbytes_per_sec": 0, 00:15:54.402 "w_mbytes_per_sec": 0 00:15:54.402 }, 00:15:54.402 "claimed": false, 00:15:54.402 "zoned": false, 00:15:54.402 "supported_io_types": { 00:15:54.402 "read": true, 00:15:54.402 "write": true, 00:15:54.402 "unmap": true, 00:15:54.402 "write_zeroes": true, 00:15:54.402 "flush": true, 00:15:54.402 "reset": false, 00:15:54.402 "compare": false, 00:15:54.402 "compare_and_write": false, 00:15:54.402 "abort": false, 00:15:54.402 "nvme_admin": false, 00:15:54.402 "nvme_io": false 00:15:54.402 }, 00:15:54.402 "driver_specific": { 00:15:54.402 "ftl": { 00:15:54.402 "base_bdev": "7401faa4-5a34-4a9a-be1d-003b79004e06", 00:15:54.402 "cache": "nvc0n1p0" 00:15:54.402 } 00:15:54.402 } 00:15:54.402 } 00:15:54.402 ] 00:15:54.402 11:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # return 0 00:15:54.402 11:57:53 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:54.402 11:57:53 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:54.659 11:57:53 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:15:54.659 11:57:53 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:54.918 [2024-07-21 11:57:53.572958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.918 [2024-07-21 11:57:53.573114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:54.918 [2024-07-21 11:57:53.573151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:54.918 [2024-07-21 11:57:53.573175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.918 [2024-07-21 11:57:53.573229] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:15:54.918 [2024-07-21 11:57:53.574544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.918 [2024-07-21 11:57:53.574590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:54.918 [2024-07-21 11:57:53.574624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.263 ms 00:15:54.918 [2024-07-21 11:57:53.574648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.918 [2024-07-21 11:57:53.575252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.918 [2024-07-21 11:57:53.575299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:54.918 [2024-07-21 11:57:53.575333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:15:54.918 [2024-07-21 11:57:53.575391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.918 [2024-07-21 11:57:53.577897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.918 [2024-07-21 11:57:53.577947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:54.918 [2024-07-21 11:57:53.577966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.445 ms 00:15:54.918 [2024-07-21 11:57:53.577989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.918 [2024-07-21 11:57:53.582924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.918 [2024-07-21 11:57:53.582954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:15:54.918 [2024-07-21 11:57:53.582966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.896 ms 00:15:54.918 [2024-07-21 11:57:53.583001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.918 [2024-07-21 11:57:53.584774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.918 [2024-07-21 11:57:53.584809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:54.918 [2024-07-21 11:57:53.584844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.622 ms 00:15:54.918 [2024-07-21 11:57:53.584852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.918 [2024-07-21 11:57:53.590756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.918 [2024-07-21 11:57:53.590791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:54.918 [2024-07-21 11:57:53.590833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.871 ms 00:15:54.918 [2024-07-21 11:57:53.590842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.918 [2024-07-21 11:57:53.591040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.918 [2024-07-21 11:57:53.591053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:54.918 [2024-07-21 11:57:53.591065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:15:54.918 [2024-07-21 11:57:53.591080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.918 [2024-07-21 11:57:53.593226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.918 [2024-07-21 11:57:53.593254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:54.918 [2024-07-21 11:57:53.593266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.117 ms 00:15:54.918 
[2024-07-21 11:57:53.593273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.918 [2024-07-21 11:57:53.594839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.918 [2024-07-21 11:57:53.594867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:54.919 [2024-07-21 11:57:53.594882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.522 ms 00:15:54.919 [2024-07-21 11:57:53.594888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.919 [2024-07-21 11:57:53.596097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.919 [2024-07-21 11:57:53.596124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:54.919 [2024-07-21 11:57:53.596135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.165 ms 00:15:54.919 [2024-07-21 11:57:53.596142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.919 [2024-07-21 11:57:53.597299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.919 [2024-07-21 11:57:53.597325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:54.919 [2024-07-21 11:57:53.597336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.062 ms 00:15:54.919 [2024-07-21 11:57:53.597343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.919 [2024-07-21 11:57:53.597386] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:54.919 [2024-07-21 11:57:53.597401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:15:54.919 [2024-07-21 11:57:53.597534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 
wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.597991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.598000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.598012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 64: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.598020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.598029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.598037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.598048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.598055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.598068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.598076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.598086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.598094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.598105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.598113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.598123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.598131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.598141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.598149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.598159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.598167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.598180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.598188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.598197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.598205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:54.919 [2024-07-21 11:57:53.598217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:54.920 [2024-07-21 11:57:53.598225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:54.920 [2024-07-21 11:57:53.598234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:54.920 [2024-07-21 11:57:53.598243] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:54.920 [2024-07-21 11:57:53.598254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:54.920 [2024-07-21 11:57:53.598262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:54.920 [2024-07-21 11:57:53.598271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:54.920 [2024-07-21 11:57:53.598279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:54.920 [2024-07-21 11:57:53.598289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:54.920 [2024-07-21 11:57:53.598297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:54.920 [2024-07-21 11:57:53.598310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:54.920 [2024-07-21 11:57:53.598346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:54.920 [2024-07-21 11:57:53.598357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:54.920 [2024-07-21 11:57:53.598365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:54.920 [2024-07-21 11:57:53.598375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:54.920 [2024-07-21 11:57:53.598389] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:54.920 [2024-07-21 11:57:53.598403] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: afb43c1b-0bd1-4f0a-97ca-9ed1ff88eaee 00:15:54.920 [2024-07-21 11:57:53.598413] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:54.920 [2024-07-21 11:57:53.598425] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:54.920 [2024-07-21 11:57:53.598433] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:54.920 [2024-07-21 11:57:53.598443] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:54.920 [2024-07-21 11:57:53.598450] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:54.920 [2024-07-21 11:57:53.598461] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:54.920 [2024-07-21 11:57:53.598468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:54.920 [2024-07-21 11:57:53.598477] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:54.920 [2024-07-21 11:57:53.598484] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:54.920 [2024-07-21 11:57:53.598493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.920 [2024-07-21 11:57:53.598501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:54.920 [2024-07-21 11:57:53.598512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.113 ms 00:15:54.920 [2024-07-21 11:57:53.598532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.920 [2024-07-21 11:57:53.601869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.920 [2024-07-21 11:57:53.601928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:15:54.920 [2024-07-21 11:57:53.601960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.304 ms 00:15:54.920 [2024-07-21 11:57:53.601980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.920 [2024-07-21 11:57:53.602198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:54.920 [2024-07-21 11:57:53.602232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:54.920 [2024-07-21 11:57:53.602266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:15:54.920 [2024-07-21 11:57:53.602287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.920 [2024-07-21 11:57:53.613174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.920 [2024-07-21 11:57:53.613249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:54.920 [2024-07-21 11:57:53.613281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.920 [2024-07-21 11:57:53.613301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.920 [2024-07-21 11:57:53.613385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.920 [2024-07-21 11:57:53.613407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:54.920 [2024-07-21 11:57:53.613435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.920 [2024-07-21 11:57:53.613463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.920 [2024-07-21 11:57:53.613599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.920 [2024-07-21 11:57:53.613640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:54.920 [2024-07-21 11:57:53.613677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.920 [2024-07-21 11:57:53.613703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.920 [2024-07-21 11:57:53.613764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.920 [2024-07-21 11:57:53.613793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:54.920 [2024-07-21 11:57:53.613831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.920 [2024-07-21 11:57:53.613873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.920 [2024-07-21 11:57:53.639581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.920 [2024-07-21 11:57:53.639693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:54.920 [2024-07-21 11:57:53.639726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.920 [2024-07-21 11:57:53.639765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.920 [2024-07-21 11:57:53.653290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.920 [2024-07-21 11:57:53.653376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:54.920 [2024-07-21 11:57:53.653408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.920 [2024-07-21 11:57:53.653429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.920 [2024-07-21 11:57:53.653559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.920 [2024-07-21 
11:57:53.653594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:54.920 [2024-07-21 11:57:53.653629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.920 [2024-07-21 11:57:53.653659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.920 [2024-07-21 11:57:53.653787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.920 [2024-07-21 11:57:53.653892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:54.920 [2024-07-21 11:57:53.653929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.920 [2024-07-21 11:57:53.653962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.920 [2024-07-21 11:57:53.654124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.920 [2024-07-21 11:57:53.654165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:54.920 [2024-07-21 11:57:53.654199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.920 [2024-07-21 11:57:53.654224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.920 [2024-07-21 11:57:53.654310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.920 [2024-07-21 11:57:53.654344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:54.920 [2024-07-21 11:57:53.654376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.920 [2024-07-21 11:57:53.654402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.920 [2024-07-21 11:57:53.654492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.920 [2024-07-21 11:57:53.654523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:54.920 [2024-07-21 11:57:53.654559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.920 [2024-07-21 11:57:53.654585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.920 [2024-07-21 11:57:53.654680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:54.920 [2024-07-21 11:57:53.654712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:54.920 [2024-07-21 11:57:53.654764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:54.920 [2024-07-21 11:57:53.654806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:54.920 [2024-07-21 11:57:53.655104] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 82.228 ms, result 0 00:15:54.920 true 00:15:54.920 11:57:53 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 88423 00:15:54.920 11:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@946 -- # '[' -z 88423 ']' 00:15:54.920 11:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # kill -0 88423 00:15:54.920 11:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@951 -- # uname 00:15:54.920 11:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:54.920 11:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 88423 00:15:54.920 killing process with pid 88423 00:15:54.920 11:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:54.920 11:57:53 ftl.ftl_fio_basic -- 
common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:54.920 11:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 88423' 00:15:54.920 11:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@965 -- # kill 88423 00:15:54.920 11:57:53 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # wait 88423 00:16:01.486 11:57:59 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:16:01.486 11:57:59 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:01.486 11:57:59 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:16:01.486 11:57:59 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:01.486 11:57:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:01.486 11:57:59 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:01.486 11:57:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:01.486 11:57:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:16:01.486 11:57:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:01.486 11:57:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local sanitizers 00:16:01.486 11:57:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:01.486 11:57:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:16:01.486 11:57:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:16:01.486 11:57:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:16:01.486 11:57:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:01.486 11:57:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:16:01.486 11:57:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:16:01.486 11:57:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:01.486 11:57:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:01.486 11:57:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:16:01.486 11:57:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:01.486 11:57:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:01.486 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:16:01.486 fio-3.35 00:16:01.486 Starting 1 thread 00:16:05.677 00:16:05.677 test: (groupid=0, jobs=1): err= 0: pid=88635: Sun Jul 21 11:58:03 2024 00:16:05.677 read: IOPS=989, BW=65.7MiB/s (68.9MB/s)(255MiB/3874msec) 00:16:05.677 slat (nsec): min=4187, max=27415, avg=6208.76, stdev=2474.18 00:16:05.677 clat (usec): min=302, max=955, avg=457.75, stdev=52.20 00:16:05.677 lat (usec): min=307, max=967, avg=463.96, stdev=52.44 00:16:05.677 clat percentiles (usec): 00:16:05.677 | 1.00th=[ 367], 5.00th=[ 375], 10.00th=[ 383], 20.00th=[ 420], 00:16:05.677 | 30.00th=[ 
441], 40.00th=[ 445], 50.00th=[ 453], 60.00th=[ 461], 00:16:05.677 | 70.00th=[ 490], 80.00th=[ 506], 90.00th=[ 519], 95.00th=[ 529], 00:16:05.677 | 99.00th=[ 562], 99.50th=[ 578], 99.90th=[ 799], 99.95th=[ 848], 00:16:05.677 | 99.99th=[ 955] 00:16:05.677 write: IOPS=996, BW=66.2MiB/s (69.4MB/s)(256MiB/3869msec); 0 zone resets 00:16:05.677 slat (nsec): min=15567, max=95923, avg=20510.21, stdev=5323.68 00:16:05.677 clat (usec): min=346, max=951, avg=511.03, stdev=66.27 00:16:05.677 lat (usec): min=376, max=993, avg=531.54, stdev=67.00 00:16:05.677 clat percentiles (usec): 00:16:05.677 | 1.00th=[ 392], 5.00th=[ 412], 10.00th=[ 453], 20.00th=[ 461], 00:16:05.677 | 30.00th=[ 469], 40.00th=[ 486], 50.00th=[ 523], 60.00th=[ 529], 00:16:05.677 | 70.00th=[ 537], 80.00th=[ 545], 90.00th=[ 578], 95.00th=[ 603], 00:16:05.677 | 99.00th=[ 807], 99.50th=[ 840], 99.90th=[ 922], 99.95th=[ 938], 00:16:05.677 | 99.99th=[ 955] 00:16:05.677 bw ( KiB/s): min=65556, max=69360, per=100.00%, avg=67806.29, stdev=1232.46, samples=7 00:16:05.677 iops : min= 964, max= 1020, avg=997.14, stdev=18.14, samples=7 00:16:05.677 lat (usec) : 500=57.51%, 750=41.44%, 1000=1.05% 00:16:05.677 cpu : usr=99.28%, sys=0.10%, ctx=24, majf=0, minf=1181 00:16:05.677 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:05.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.677 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:05.677 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:05.677 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:05.677 00:16:05.677 Run status group 0 (all jobs): 00:16:05.677 READ: bw=65.7MiB/s (68.9MB/s), 65.7MiB/s-65.7MiB/s (68.9MB/s-68.9MB/s), io=255MiB (267MB), run=3874-3874msec 00:16:05.677 WRITE: bw=66.2MiB/s (69.4MB/s), 66.2MiB/s-66.2MiB/s (69.4MB/s-69.4MB/s), io=256MiB (269MB), run=3869-3869msec 00:16:05.947 ----------------------------------------------------- 00:16:05.947 Suppressions used: 00:16:05.947 count bytes template 00:16:05.947 1 5 /usr/src/fio/parse.c 00:16:05.947 1 8 libtcmalloc_minimal.so 00:16:05.947 1 904 libcrypto.so 00:16:05.947 ----------------------------------------------------- 00:16:05.947 00:16:05.947 11:58:04 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:16:05.947 11:58:04 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:05.947 11:58:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:05.947 11:58:04 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:05.947 11:58:04 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:16:05.947 11:58:04 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:05.947 11:58:04 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:05.947 11:58:04 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:05.947 11:58:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:05.947 11:58:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:16:05.947 11:58:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:05.947 11:58:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local sanitizers 
00:16:05.947 11:58:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:05.947 11:58:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:16:05.947 11:58:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:16:05.947 11:58:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:16:05.947 11:58:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:05.947 11:58:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:16:05.947 11:58:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:16:05.947 11:58:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:05.947 11:58:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:05.947 11:58:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:16:05.947 11:58:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:05.948 11:58:04 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:06.206 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:06.206 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:06.206 fio-3.35 00:16:06.206 Starting 2 threads 00:16:32.762 00:16:32.762 first_half: (groupid=0, jobs=1): err= 0: pid=88720: Sun Jul 21 11:58:29 2024 00:16:32.762 read: IOPS=2686, BW=10.5MiB/s (11.0MB/s)(255MiB/24293msec) 00:16:32.762 slat (nsec): min=3599, max=93901, avg=6440.18, stdev=2083.93 00:16:32.762 clat (usec): min=1113, max=281023, avg=35318.28, stdev=19837.50 00:16:32.762 lat (usec): min=1122, max=281031, avg=35324.72, stdev=19837.86 00:16:32.762 clat percentiles (msec): 00:16:32.762 | 1.00th=[ 9], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 32], 00:16:32.762 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 33], 00:16:32.762 | 70.00th=[ 33], 80.00th=[ 33], 90.00th=[ 39], 95.00th=[ 46], 00:16:32.762 | 99.00th=[ 157], 99.50th=[ 171], 99.90th=[ 190], 99.95th=[ 203], 00:16:32.762 | 99.99th=[ 268] 00:16:32.762 write: IOPS=3351, BW=13.1MiB/s (13.7MB/s)(256MiB/19552msec); 0 zone resets 00:16:32.762 slat (usec): min=4, max=658, avg= 8.90, stdev= 7.50 00:16:32.762 clat (usec): min=392, max=116760, avg=12170.43, stdev=20208.08 00:16:32.762 lat (usec): min=402, max=116768, avg=12179.33, stdev=20208.33 00:16:32.762 clat percentiles (usec): 00:16:32.762 | 1.00th=[ 955], 5.00th=[ 1319], 10.00th=[ 1614], 20.00th=[ 1958], 00:16:32.762 | 30.00th=[ 2507], 40.00th=[ 4555], 50.00th=[ 5997], 60.00th=[ 7177], 00:16:32.762 | 70.00th=[ 8717], 80.00th=[ 12387], 90.00th=[ 29754], 95.00th=[ 77071], 00:16:32.762 | 99.00th=[ 91751], 99.50th=[ 93848], 99.90th=[111674], 99.95th=[113771], 00:16:32.762 | 99.99th=[115868] 00:16:32.762 bw ( KiB/s): min= 1824, max=43448, per=81.09%, avg=20971.52, stdev=12752.79, samples=25 00:16:32.762 iops : min= 456, max=10862, avg=5242.88, stdev=3188.20, samples=25 00:16:32.762 lat (usec) : 500=0.01%, 750=0.10%, 1000=0.57% 00:16:32.762 lat (msec) : 2=9.92%, 4=8.18%, 10=19.51%, 20=8.09%, 50=48.09% 00:16:32.762 lat (msec) : 100=4.16%, 
250=1.37%, 500=0.01% 00:16:32.762 cpu : usr=99.07%, sys=0.27%, ctx=264, majf=0, minf=5575 00:16:32.762 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:32.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.762 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:32.762 issued rwts: total=65261,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.762 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:32.762 second_half: (groupid=0, jobs=1): err= 0: pid=88721: Sun Jul 21 11:58:29 2024 00:16:32.762 read: IOPS=2704, BW=10.6MiB/s (11.1MB/s)(255MiB/24150msec) 00:16:32.762 slat (nsec): min=3790, max=31896, avg=6909.82, stdev=2087.96 00:16:32.762 clat (usec): min=1070, max=288545, avg=36522.01, stdev=20150.87 00:16:32.762 lat (usec): min=1079, max=288553, avg=36528.92, stdev=20151.21 00:16:32.762 clat percentiles (msec): 00:16:32.762 | 1.00th=[ 16], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 32], 00:16:32.762 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 33], 00:16:32.762 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 39], 95.00th=[ 59], 00:16:32.762 | 99.00th=[ 153], 99.50th=[ 174], 99.90th=[ 213], 99.95th=[ 241], 00:16:32.762 | 99.99th=[ 284] 00:16:32.762 write: IOPS=3232, BW=12.6MiB/s (13.2MB/s)(256MiB/20274msec); 0 zone resets 00:16:32.762 slat (usec): min=4, max=724, avg= 9.28, stdev= 6.83 00:16:32.762 clat (usec): min=432, max=116528, avg=10742.66, stdev=19345.73 00:16:32.762 lat (usec): min=440, max=116535, avg=10751.94, stdev=19345.83 00:16:32.762 clat percentiles (usec): 00:16:32.762 | 1.00th=[ 1057], 5.00th=[ 1500], 10.00th=[ 1762], 20.00th=[ 2114], 00:16:32.762 | 30.00th=[ 2769], 40.00th=[ 4228], 50.00th=[ 5473], 60.00th=[ 6587], 00:16:32.762 | 70.00th=[ 7242], 80.00th=[ 10290], 90.00th=[ 14615], 95.00th=[ 77071], 00:16:32.762 | 99.00th=[ 90702], 99.50th=[ 94897], 99.90th=[108528], 99.95th=[113771], 00:16:32.762 | 99.99th=[115868] 00:16:32.762 bw ( KiB/s): min= 112, max=40888, per=88.15%, avg=22795.13, stdev=12701.65, samples=23 00:16:32.762 iops : min= 28, max=10222, avg=5698.78, stdev=3175.41, samples=23 00:16:32.762 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.28% 00:16:32.762 lat (msec) : 2=8.44%, 4=10.44%, 10=21.02%, 20=6.71%, 50=46.91% 00:16:32.762 lat (msec) : 100=4.74%, 250=1.42%, 500=0.01% 00:16:32.762 cpu : usr=99.18%, sys=0.19%, ctx=36, majf=0, minf=5565 00:16:32.762 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:32.762 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:32.762 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:32.762 issued rwts: total=65312,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:32.762 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:32.762 00:16:32.762 Run status group 0 (all jobs): 00:16:32.762 READ: bw=21.0MiB/s (22.0MB/s), 10.5MiB/s-10.6MiB/s (11.0MB/s-11.1MB/s), io=510MiB (535MB), run=24150-24293msec 00:16:32.762 WRITE: bw=25.3MiB/s (26.5MB/s), 12.6MiB/s-13.1MiB/s (13.2MB/s-13.7MB/s), io=512MiB (537MB), run=19552-20274msec 00:16:32.762 ----------------------------------------------------- 00:16:32.762 Suppressions used: 00:16:32.762 count bytes template 00:16:32.762 2 10 /usr/src/fio/parse.c 00:16:32.762 3 288 /usr/src/fio/iolog.c 00:16:32.762 1 8 libtcmalloc_minimal.so 00:16:32.762 1 904 libcrypto.so 00:16:32.762 ----------------------------------------------------- 00:16:32.762 00:16:32.762 11:58:31 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # 
timing_exit randw-verify-j2 00:16:32.762 11:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:32.762 11:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:32.762 11:58:31 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:32.762 11:58:31 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:32.762 11:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:32.762 11:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:32.762 11:58:31 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:32.762 11:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:32.762 11:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:16:32.762 11:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:32.762 11:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1335 -- # local sanitizers 00:16:32.762 11:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1336 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:32.762 11:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # shift 00:16:32.762 11:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local asan_lib= 00:16:32.762 11:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:16:32.762 11:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:32.762 11:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:16:32.762 11:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # grep libasan 00:16:32.762 11:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:32.762 11:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:32.762 11:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # break 00:16:32.762 11:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:32.763 11:58:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:33.020 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:33.020 fio-3.35 00:16:33.020 Starting 1 thread 00:16:47.904 00:16:47.904 test: (groupid=0, jobs=1): err= 0: pid=89027: Sun Jul 21 11:58:45 2024 00:16:47.904 read: IOPS=7785, BW=30.4MiB/s (31.9MB/s)(255MiB/8375msec) 00:16:47.904 slat (nsec): min=3543, max=30115, avg=5592.74, stdev=1667.28 00:16:47.904 clat (usec): min=740, max=33361, avg=16430.56, stdev=1365.64 00:16:47.904 lat (usec): min=744, max=33366, avg=16436.15, stdev=1365.66 00:16:47.904 clat percentiles (usec): 00:16:47.904 | 1.00th=[15139], 5.00th=[15401], 10.00th=[15533], 20.00th=[15664], 00:16:47.904 | 30.00th=[15795], 40.00th=[15926], 50.00th=[16188], 60.00th=[16319], 00:16:47.904 | 70.00th=[16581], 80.00th=[17171], 90.00th=[17695], 95.00th=[17957], 00:16:47.904 | 99.00th=[20317], 99.50th=[26084], 
99.90th=[30802], 99.95th=[31851], 00:16:47.904 | 99.99th=[32637] 00:16:47.904 write: IOPS=12.8k, BW=50.1MiB/s (52.5MB/s)(256MiB/5109msec); 0 zone resets 00:16:47.904 slat (usec): min=4, max=551, avg= 8.44, stdev= 5.51 00:16:47.904 clat (usec): min=676, max=59853, avg=9928.85, stdev=12198.00 00:16:47.904 lat (usec): min=691, max=59860, avg=9937.29, stdev=12198.02 00:16:47.904 clat percentiles (usec): 00:16:47.904 | 1.00th=[ 1020], 5.00th=[ 1237], 10.00th=[ 1401], 20.00th=[ 1582], 00:16:47.904 | 30.00th=[ 1762], 40.00th=[ 2114], 50.00th=[ 6587], 60.00th=[ 7635], 00:16:47.904 | 70.00th=[ 8586], 80.00th=[10421], 90.00th=[35914], 95.00th=[37487], 00:16:47.904 | 99.00th=[40109], 99.50th=[45351], 99.90th=[56361], 99.95th=[58459], 00:16:47.904 | 99.99th=[59507] 00:16:47.904 bw ( KiB/s): min= 8656, max=68928, per=92.89%, avg=47662.55, stdev=15481.29, samples=11 00:16:47.904 iops : min= 2164, max=17232, avg=11915.64, stdev=3870.32, samples=11 00:16:47.904 lat (usec) : 750=0.01%, 1000=0.39% 00:16:47.904 lat (msec) : 2=18.91%, 4=1.77%, 10=18.04%, 20=52.35%, 50=8.37% 00:16:47.904 lat (msec) : 100=0.17% 00:16:47.904 cpu : usr=99.01%, sys=0.27%, ctx=20, majf=0, minf=5577 00:16:47.904 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:47.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:47.904 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:47.904 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:47.904 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:47.904 00:16:47.904 Run status group 0 (all jobs): 00:16:47.904 READ: bw=30.4MiB/s (31.9MB/s), 30.4MiB/s-30.4MiB/s (31.9MB/s-31.9MB/s), io=255MiB (267MB), run=8375-8375msec 00:16:47.904 WRITE: bw=50.1MiB/s (52.5MB/s), 50.1MiB/s-50.1MiB/s (52.5MB/s-52.5MB/s), io=256MiB (268MB), run=5109-5109msec 00:16:47.904 ----------------------------------------------------- 00:16:47.904 Suppressions used: 00:16:47.904 count bytes template 00:16:47.904 1 5 /usr/src/fio/parse.c 00:16:47.904 2 192 /usr/src/fio/iolog.c 00:16:47.904 1 8 libtcmalloc_minimal.so 00:16:47.904 1 904 libcrypto.so 00:16:47.904 ----------------------------------------------------- 00:16:47.904 00:16:47.904 11:58:46 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:47.904 11:58:46 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:47.904 11:58:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:48.163 11:58:46 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:48.163 Remove shared memory files 00:16:48.163 11:58:46 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:48.163 11:58:46 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:48.163 11:58:46 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:48.163 11:58:46 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:48.163 11:58:46 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid74489 /dev/shm/spdk_tgt_trace.pid87392 00:16:48.163 11:58:46 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:48.163 11:58:46 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:48.163 ************************************ 00:16:48.163 END TEST ftl_fio_basic 00:16:48.163 ************************************ 00:16:48.163 00:16:48.163 real 1m0.079s 00:16:48.163 user 2m14.792s 00:16:48.163 sys 
0m3.290s 00:16:48.163 11:58:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:48.163 11:58:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:48.163 11:58:46 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:48.163 11:58:46 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:16:48.163 11:58:46 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:48.163 11:58:46 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:48.163 ************************************ 00:16:48.163 START TEST ftl_bdevperf 00:16:48.163 ************************************ 00:16:48.163 11:58:46 ftl.ftl_bdevperf -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:48.163 * Looking for test storage... 00:16:48.163 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:48.163 11:58:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:48.163 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@19 -- # bdevperf_pid=89261 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # waitforlisten 89261 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 89261 ']' 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:48.422 11:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:48.422 [2024-07-21 11:58:47.138805] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:16:48.422 [2024-07-21 11:58:47.139025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89261 ] 00:16:48.680 [2024-07-21 11:58:47.301781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.680 [2024-07-21 11:58:47.345271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:49.247 11:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:49.247 11:58:47 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # return 0 00:16:49.247 11:58:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:49.247 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:49.247 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:49.247 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:49.247 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:49.247 11:58:47 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:49.506 11:58:48 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:49.506 11:58:48 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:49.506 11:58:48 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:49.506 11:58:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:16:49.506 11:58:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:49.506 11:58:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:16:49.506 11:58:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:16:49.506 11:58:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:49.765 11:58:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:49.765 { 00:16:49.765 "name": "nvme0n1", 00:16:49.765 "aliases": [ 00:16:49.765 "49fc81c6-e96b-40e7-a70a-df140f695ad9" 00:16:49.765 ], 00:16:49.765 "product_name": "NVMe disk", 00:16:49.765 "block_size": 4096, 00:16:49.765 "num_blocks": 1310720, 00:16:49.765 "uuid": "49fc81c6-e96b-40e7-a70a-df140f695ad9", 00:16:49.765 "assigned_rate_limits": { 00:16:49.765 "rw_ios_per_sec": 0, 00:16:49.765 "rw_mbytes_per_sec": 0, 00:16:49.765 "r_mbytes_per_sec": 0, 00:16:49.765 "w_mbytes_per_sec": 0 00:16:49.765 }, 00:16:49.765 "claimed": true, 00:16:49.765 "claim_type": "read_many_write_one", 00:16:49.765 "zoned": false, 00:16:49.765 "supported_io_types": { 00:16:49.765 "read": true, 00:16:49.765 "write": true, 00:16:49.765 "unmap": true, 00:16:49.765 "write_zeroes": true, 00:16:49.765 "flush": true, 00:16:49.765 "reset": true, 00:16:49.765 "compare": true, 00:16:49.765 "compare_and_write": false, 00:16:49.765 "abort": true, 00:16:49.765 "nvme_admin": true, 00:16:49.765 "nvme_io": true 00:16:49.765 }, 00:16:49.765 "driver_specific": { 00:16:49.765 "nvme": [ 00:16:49.765 { 00:16:49.765 "pci_address": "0000:00:11.0", 00:16:49.765 "trid": { 00:16:49.765 "trtype": "PCIe", 00:16:49.765 "traddr": "0000:00:11.0" 00:16:49.765 }, 00:16:49.765 "ctrlr_data": { 00:16:49.765 "cntlid": 0, 00:16:49.765 "vendor_id": "0x1b36", 00:16:49.765 "model_number": "QEMU NVMe Ctrl", 00:16:49.765 "serial_number": "12341", 
00:16:49.765 "firmware_revision": "8.0.0", 00:16:49.765 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:49.765 "oacs": { 00:16:49.765 "security": 0, 00:16:49.765 "format": 1, 00:16:49.765 "firmware": 0, 00:16:49.765 "ns_manage": 1 00:16:49.765 }, 00:16:49.765 "multi_ctrlr": false, 00:16:49.765 "ana_reporting": false 00:16:49.765 }, 00:16:49.765 "vs": { 00:16:49.765 "nvme_version": "1.4" 00:16:49.765 }, 00:16:49.765 "ns_data": { 00:16:49.765 "id": 1, 00:16:49.765 "can_share": false 00:16:49.765 } 00:16:49.765 } 00:16:49.765 ], 00:16:49.765 "mp_policy": "active_passive" 00:16:49.765 } 00:16:49.765 } 00:16:49.765 ]' 00:16:49.765 11:58:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:49.765 11:58:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:16:49.765 11:58:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:49.765 11:58:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=1310720 00:16:49.765 11:58:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:16:49.765 11:58:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 5120 00:16:49.765 11:58:48 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:49.765 11:58:48 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:49.765 11:58:48 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:49.765 11:58:48 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:49.765 11:58:48 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:50.024 11:58:48 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=29c949f6-0f88-40d7-b3ef-fd3508f2f796 00:16:50.024 11:58:48 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:50.024 11:58:48 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 29c949f6-0f88-40d7-b3ef-fd3508f2f796 00:16:50.024 11:58:48 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:50.282 11:58:49 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=2c407c8a-39be-44e5-8876-8f119937909f 00:16:50.282 11:58:49 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 2c407c8a-39be-44e5-8876-8f119937909f 00:16:50.539 11:58:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # split_bdev=9d4d3f09-994a-439d-9365-92f91afe091e 00:16:50.539 11:58:49 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9d4d3f09-994a-439d-9365-92f91afe091e 00:16:50.539 11:58:49 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:50.539 11:58:49 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:50.539 11:58:49 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=9d4d3f09-994a-439d-9365-92f91afe091e 00:16:50.540 11:58:49 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:50.540 11:58:49 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 9d4d3f09-994a-439d-9365-92f91afe091e 00:16:50.540 11:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=9d4d3f09-994a-439d-9365-92f91afe091e 00:16:50.540 11:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:50.540 11:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:16:50.540 11:58:49 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1377 -- # local nb 00:16:50.540 11:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9d4d3f09-994a-439d-9365-92f91afe091e 00:16:50.798 11:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:50.798 { 00:16:50.798 "name": "9d4d3f09-994a-439d-9365-92f91afe091e", 00:16:50.798 "aliases": [ 00:16:50.798 "lvs/nvme0n1p0" 00:16:50.798 ], 00:16:50.798 "product_name": "Logical Volume", 00:16:50.798 "block_size": 4096, 00:16:50.798 "num_blocks": 26476544, 00:16:50.798 "uuid": "9d4d3f09-994a-439d-9365-92f91afe091e", 00:16:50.798 "assigned_rate_limits": { 00:16:50.798 "rw_ios_per_sec": 0, 00:16:50.798 "rw_mbytes_per_sec": 0, 00:16:50.798 "r_mbytes_per_sec": 0, 00:16:50.798 "w_mbytes_per_sec": 0 00:16:50.798 }, 00:16:50.798 "claimed": false, 00:16:50.798 "zoned": false, 00:16:50.798 "supported_io_types": { 00:16:50.798 "read": true, 00:16:50.798 "write": true, 00:16:50.798 "unmap": true, 00:16:50.798 "write_zeroes": true, 00:16:50.798 "flush": false, 00:16:50.798 "reset": true, 00:16:50.798 "compare": false, 00:16:50.798 "compare_and_write": false, 00:16:50.798 "abort": false, 00:16:50.798 "nvme_admin": false, 00:16:50.798 "nvme_io": false 00:16:50.798 }, 00:16:50.798 "driver_specific": { 00:16:50.798 "lvol": { 00:16:50.798 "lvol_store_uuid": "2c407c8a-39be-44e5-8876-8f119937909f", 00:16:50.798 "base_bdev": "nvme0n1", 00:16:50.798 "thin_provision": true, 00:16:50.798 "num_allocated_clusters": 0, 00:16:50.798 "snapshot": false, 00:16:50.798 "clone": false, 00:16:50.798 "esnap_clone": false 00:16:50.798 } 00:16:50.798 } 00:16:50.798 } 00:16:50.798 ]' 00:16:50.798 11:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:50.798 11:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:16:50.798 11:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:50.798 11:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:50.798 11:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:50.798 11:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:16:50.798 11:58:49 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:50.798 11:58:49 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:50.798 11:58:49 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:51.057 11:58:49 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:51.057 11:58:49 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:51.057 11:58:49 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 9d4d3f09-994a-439d-9365-92f91afe091e 00:16:51.057 11:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=9d4d3f09-994a-439d-9365-92f91afe091e 00:16:51.057 11:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:51.057 11:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:16:51.057 11:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:16:51.057 11:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9d4d3f09-994a-439d-9365-92f91afe091e 00:16:51.316 11:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:51.316 { 00:16:51.316 "name": 
"9d4d3f09-994a-439d-9365-92f91afe091e", 00:16:51.316 "aliases": [ 00:16:51.316 "lvs/nvme0n1p0" 00:16:51.316 ], 00:16:51.316 "product_name": "Logical Volume", 00:16:51.316 "block_size": 4096, 00:16:51.316 "num_blocks": 26476544, 00:16:51.316 "uuid": "9d4d3f09-994a-439d-9365-92f91afe091e", 00:16:51.316 "assigned_rate_limits": { 00:16:51.316 "rw_ios_per_sec": 0, 00:16:51.316 "rw_mbytes_per_sec": 0, 00:16:51.316 "r_mbytes_per_sec": 0, 00:16:51.316 "w_mbytes_per_sec": 0 00:16:51.316 }, 00:16:51.316 "claimed": false, 00:16:51.316 "zoned": false, 00:16:51.316 "supported_io_types": { 00:16:51.316 "read": true, 00:16:51.316 "write": true, 00:16:51.316 "unmap": true, 00:16:51.316 "write_zeroes": true, 00:16:51.316 "flush": false, 00:16:51.316 "reset": true, 00:16:51.316 "compare": false, 00:16:51.316 "compare_and_write": false, 00:16:51.316 "abort": false, 00:16:51.316 "nvme_admin": false, 00:16:51.316 "nvme_io": false 00:16:51.316 }, 00:16:51.316 "driver_specific": { 00:16:51.316 "lvol": { 00:16:51.316 "lvol_store_uuid": "2c407c8a-39be-44e5-8876-8f119937909f", 00:16:51.316 "base_bdev": "nvme0n1", 00:16:51.316 "thin_provision": true, 00:16:51.316 "num_allocated_clusters": 0, 00:16:51.316 "snapshot": false, 00:16:51.316 "clone": false, 00:16:51.316 "esnap_clone": false 00:16:51.316 } 00:16:51.316 } 00:16:51.316 } 00:16:51.316 ]' 00:16:51.316 11:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:51.316 11:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:16:51.316 11:58:49 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:51.316 11:58:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:51.316 11:58:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:51.316 11:58:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:16:51.316 11:58:50 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:51.316 11:58:50 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:51.576 11:58:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:16:51.576 11:58:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # get_bdev_size 9d4d3f09-994a-439d-9365-92f91afe091e 00:16:51.576 11:58:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1374 -- # local bdev_name=9d4d3f09-994a-439d-9365-92f91afe091e 00:16:51.576 11:58:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1375 -- # local bdev_info 00:16:51.576 11:58:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1376 -- # local bs 00:16:51.576 11:58:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1377 -- # local nb 00:16:51.576 11:58:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9d4d3f09-994a-439d-9365-92f91afe091e 00:16:51.576 11:58:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:16:51.576 { 00:16:51.576 "name": "9d4d3f09-994a-439d-9365-92f91afe091e", 00:16:51.576 "aliases": [ 00:16:51.576 "lvs/nvme0n1p0" 00:16:51.576 ], 00:16:51.576 "product_name": "Logical Volume", 00:16:51.576 "block_size": 4096, 00:16:51.576 "num_blocks": 26476544, 00:16:51.576 "uuid": "9d4d3f09-994a-439d-9365-92f91afe091e", 00:16:51.576 "assigned_rate_limits": { 00:16:51.576 "rw_ios_per_sec": 0, 00:16:51.576 "rw_mbytes_per_sec": 0, 00:16:51.576 "r_mbytes_per_sec": 0, 00:16:51.576 "w_mbytes_per_sec": 0 00:16:51.576 }, 00:16:51.576 "claimed": false, 
00:16:51.576 "zoned": false, 00:16:51.576 "supported_io_types": { 00:16:51.576 "read": true, 00:16:51.576 "write": true, 00:16:51.576 "unmap": true, 00:16:51.576 "write_zeroes": true, 00:16:51.576 "flush": false, 00:16:51.576 "reset": true, 00:16:51.576 "compare": false, 00:16:51.576 "compare_and_write": false, 00:16:51.576 "abort": false, 00:16:51.576 "nvme_admin": false, 00:16:51.576 "nvme_io": false 00:16:51.576 }, 00:16:51.576 "driver_specific": { 00:16:51.576 "lvol": { 00:16:51.576 "lvol_store_uuid": "2c407c8a-39be-44e5-8876-8f119937909f", 00:16:51.576 "base_bdev": "nvme0n1", 00:16:51.576 "thin_provision": true, 00:16:51.576 "num_allocated_clusters": 0, 00:16:51.576 "snapshot": false, 00:16:51.576 "clone": false, 00:16:51.576 "esnap_clone": false 00:16:51.576 } 00:16:51.576 } 00:16:51.576 } 00:16:51.576 ]' 00:16:51.576 11:58:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:16:51.836 11:58:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # bs=4096 00:16:51.836 11:58:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:16:51.836 11:58:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # nb=26476544 00:16:51.836 11:58:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:16:51.836 11:58:50 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # echo 103424 00:16:51.836 11:58:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:16:51.836 11:58:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9d4d3f09-994a-439d-9365-92f91afe091e -c nvc0n1p0 --l2p_dram_limit 20 00:16:51.836 [2024-07-21 11:58:50.653536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.836 [2024-07-21 11:58:50.653595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:51.836 [2024-07-21 11:58:50.653608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:51.836 [2024-07-21 11:58:50.653618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.836 [2024-07-21 11:58:50.653676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.836 [2024-07-21 11:58:50.653686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:51.836 [2024-07-21 11:58:50.653696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:16:51.836 [2024-07-21 11:58:50.653716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.836 [2024-07-21 11:58:50.653739] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:51.836 [2024-07-21 11:58:50.654059] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:51.836 [2024-07-21 11:58:50.654080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.836 [2024-07-21 11:58:50.654091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:51.836 [2024-07-21 11:58:50.654099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.352 ms 00:16:51.836 [2024-07-21 11:58:50.654108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.836 [2024-07-21 11:58:50.654136] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 6ffb713f-6d69-4f5e-ac36-15e7577725a0 00:16:51.836 [2024-07-21 11:58:50.655541] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.836 [2024-07-21 11:58:50.655571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:51.836 [2024-07-21 11:58:50.655591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:16:51.836 [2024-07-21 11:58:50.655599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.836 [2024-07-21 11:58:50.663005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.836 [2024-07-21 11:58:50.663039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:51.836 [2024-07-21 11:58:50.663051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.354 ms 00:16:51.836 [2024-07-21 11:58:50.663058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.836 [2024-07-21 11:58:50.663144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.836 [2024-07-21 11:58:50.663157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:51.836 [2024-07-21 11:58:50.663167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:16:51.836 [2024-07-21 11:58:50.663178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.836 [2024-07-21 11:58:50.663232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.836 [2024-07-21 11:58:50.663240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:51.836 [2024-07-21 11:58:50.663250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:51.836 [2024-07-21 11:58:50.663256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.836 [2024-07-21 11:58:50.663279] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:51.836 [2024-07-21 11:58:50.664976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.836 [2024-07-21 11:58:50.665016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:51.836 [2024-07-21 11:58:50.665031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.710 ms 00:16:51.836 [2024-07-21 11:58:50.665041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.836 [2024-07-21 11:58:50.665070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.837 [2024-07-21 11:58:50.665080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:51.837 [2024-07-21 11:58:50.665087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:51.837 [2024-07-21 11:58:50.665098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.837 [2024-07-21 11:58:50.665120] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:51.837 [2024-07-21 11:58:50.665244] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:51.837 [2024-07-21 11:58:50.665255] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:51.837 [2024-07-21 11:58:50.665267] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:16:51.837 [2024-07-21 11:58:50.665277] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:51.837 [2024-07-21 
11:58:50.665287] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:51.837 [2024-07-21 11:58:50.665294] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:51.837 [2024-07-21 11:58:50.665303] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:51.837 [2024-07-21 11:58:50.665312] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:51.837 [2024-07-21 11:58:50.665321] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:51.837 [2024-07-21 11:58:50.665328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.837 [2024-07-21 11:58:50.665337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:51.837 [2024-07-21 11:58:50.665345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:16:51.837 [2024-07-21 11:58:50.665353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.837 [2024-07-21 11:58:50.665423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.837 [2024-07-21 11:58:50.665437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:51.837 [2024-07-21 11:58:50.665455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:16:51.837 [2024-07-21 11:58:50.665464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.837 [2024-07-21 11:58:50.665539] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:51.837 [2024-07-21 11:58:50.665550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:51.837 [2024-07-21 11:58:50.665558] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:51.837 [2024-07-21 11:58:50.665574] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.837 [2024-07-21 11:58:50.665581] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:51.837 [2024-07-21 11:58:50.665590] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:51.837 [2024-07-21 11:58:50.665596] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:51.837 [2024-07-21 11:58:50.665604] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:51.837 [2024-07-21 11:58:50.665611] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:51.837 [2024-07-21 11:58:50.665618] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:51.837 [2024-07-21 11:58:50.665634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:51.837 [2024-07-21 11:58:50.665643] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:51.837 [2024-07-21 11:58:50.665650] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:51.837 [2024-07-21 11:58:50.665662] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:51.837 [2024-07-21 11:58:50.665669] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:51.837 [2024-07-21 11:58:50.665677] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.837 [2024-07-21 11:58:50.665683] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:51.837 [2024-07-21 11:58:50.665693] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:51.837 [2024-07-21 
11:58:50.665699] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.837 [2024-07-21 11:58:50.665707] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:51.837 [2024-07-21 11:58:50.665713] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:51.837 [2024-07-21 11:58:50.665721] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:51.837 [2024-07-21 11:58:50.665727] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:51.837 [2024-07-21 11:58:50.665735] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:51.837 [2024-07-21 11:58:50.665740] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:51.837 [2024-07-21 11:58:50.665749] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:51.837 [2024-07-21 11:58:50.665755] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:51.837 [2024-07-21 11:58:50.665763] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:51.837 [2024-07-21 11:58:50.665769] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:51.837 [2024-07-21 11:58:50.665781] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:51.837 [2024-07-21 11:58:50.665787] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:51.837 [2024-07-21 11:58:50.665795] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:51.837 [2024-07-21 11:58:50.665801] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:51.837 [2024-07-21 11:58:50.665809] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:51.837 [2024-07-21 11:58:50.665815] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:51.837 [2024-07-21 11:58:50.665834] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:51.837 [2024-07-21 11:58:50.665840] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:51.837 [2024-07-21 11:58:50.665849] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:51.837 [2024-07-21 11:58:50.665855] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:51.837 [2024-07-21 11:58:50.665863] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.837 [2024-07-21 11:58:50.665869] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:51.837 [2024-07-21 11:58:50.665877] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:51.837 [2024-07-21 11:58:50.665884] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.837 [2024-07-21 11:58:50.665893] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:51.837 [2024-07-21 11:58:50.665900] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:51.837 [2024-07-21 11:58:50.665913] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:51.837 [2024-07-21 11:58:50.665920] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:51.837 [2024-07-21 11:58:50.665930] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:51.837 [2024-07-21 11:58:50.665937] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:51.837 [2024-07-21 11:58:50.665945] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] 
blocks: 3.38 MiB 00:16:51.837 [2024-07-21 11:58:50.665951] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:51.837 [2024-07-21 11:58:50.665959] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:51.837 [2024-07-21 11:58:50.665965] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:51.837 [2024-07-21 11:58:50.665978] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:51.837 [2024-07-21 11:58:50.665993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:51.837 [2024-07-21 11:58:50.666012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:51.837 [2024-07-21 11:58:50.666019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:51.837 [2024-07-21 11:58:50.666028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:51.837 [2024-07-21 11:58:50.666035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:51.837 [2024-07-21 11:58:50.666043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:51.837 [2024-07-21 11:58:50.666049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:51.837 [2024-07-21 11:58:50.666063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:51.837 [2024-07-21 11:58:50.666070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:51.837 [2024-07-21 11:58:50.666079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:51.837 [2024-07-21 11:58:50.666085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:51.837 [2024-07-21 11:58:50.666093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:51.837 [2024-07-21 11:58:50.666100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:51.837 [2024-07-21 11:58:50.666109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:51.837 [2024-07-21 11:58:50.666116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:51.837 [2024-07-21 11:58:50.666124] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:51.837 [2024-07-21 11:58:50.666132] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:51.837 [2024-07-21 11:58:50.666141] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:51.837 [2024-07-21 11:58:50.666148] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:51.837 [2024-07-21 11:58:50.666156] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:51.837 [2024-07-21 11:58:50.666164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:51.837 [2024-07-21 11:58:50.666173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:51.837 [2024-07-21 11:58:50.666180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:51.837 [2024-07-21 11:58:50.666193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:16:51.837 [2024-07-21 11:58:50.666199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:51.837 [2024-07-21 11:58:50.666234] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:16:51.837 [2024-07-21 11:58:50.666242] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:56.027 [2024-07-21 11:58:54.314053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.027 [2024-07-21 11:58:54.314167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:56.027 [2024-07-21 11:58:54.314203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3654.843 ms 00:16:56.027 [2024-07-21 11:58:54.314224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.027 [2024-07-21 11:58:54.332793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.027 [2024-07-21 11:58:54.332901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:56.027 [2024-07-21 11:58:54.332947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.508 ms 00:16:56.027 [2024-07-21 11:58:54.332973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.027 [2024-07-21 11:58:54.333139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.027 [2024-07-21 11:58:54.333174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:56.027 [2024-07-21 11:58:54.333202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:16:56.027 [2024-07-21 11:58:54.333222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.027 [2024-07-21 11:58:54.343227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.027 [2024-07-21 11:58:54.343325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:56.027 [2024-07-21 11:58:54.343364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.951 ms 00:16:56.028 [2024-07-21 11:58:54.343390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.028 [2024-07-21 11:58:54.343445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.028 [2024-07-21 11:58:54.343475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:56.028 [2024-07-21 11:58:54.343502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:56.028 [2024-07-21 11:58:54.343527] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.028 [2024-07-21 11:58:54.344117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.028 [2024-07-21 11:58:54.344171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:56.028 [2024-07-21 11:58:54.344210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.443 ms 00:16:56.028 [2024-07-21 11:58:54.344256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.028 [2024-07-21 11:58:54.344399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.028 [2024-07-21 11:58:54.344447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:56.028 [2024-07-21 11:58:54.344485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:16:56.028 [2024-07-21 11:58:54.344519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.028 [2024-07-21 11:58:54.350269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.028 [2024-07-21 11:58:54.350334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:56.028 [2024-07-21 11:58:54.350364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.711 ms 00:16:56.028 [2024-07-21 11:58:54.350383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.028 [2024-07-21 11:58:54.357559] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:56.028 [2024-07-21 11:58:54.363315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.028 [2024-07-21 11:58:54.363381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:56.028 [2024-07-21 11:58:54.363408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.884 ms 00:16:56.028 [2024-07-21 11:58:54.363430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.028 [2024-07-21 11:58:54.441240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.028 [2024-07-21 11:58:54.441386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:56.028 [2024-07-21 11:58:54.441421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.925 ms 00:16:56.028 [2024-07-21 11:58:54.441446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.028 [2024-07-21 11:58:54.441622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.028 [2024-07-21 11:58:54.441663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:56.028 [2024-07-21 11:58:54.441692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:16:56.028 [2024-07-21 11:58:54.441713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.028 [2024-07-21 11:58:54.445406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.028 [2024-07-21 11:58:54.445481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:56.028 [2024-07-21 11:58:54.445509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.644 ms 00:16:56.028 [2024-07-21 11:58:54.445534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.028 [2024-07-21 11:58:54.448374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.028 [2024-07-21 11:58:54.448445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Save initial chunk info metadata 00:16:56.028 [2024-07-21 11:58:54.448488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.802 ms 00:16:56.028 [2024-07-21 11:58:54.448517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.028 [2024-07-21 11:58:54.448781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.028 [2024-07-21 11:58:54.448842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:56.028 [2024-07-21 11:58:54.448878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.223 ms 00:16:56.028 [2024-07-21 11:58:54.448904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.028 [2024-07-21 11:58:54.490713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.028 [2024-07-21 11:58:54.490813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:56.028 [2024-07-21 11:58:54.490855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.841 ms 00:16:56.028 [2024-07-21 11:58:54.490894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.028 [2024-07-21 11:58:54.495412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.028 [2024-07-21 11:58:54.495494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:56.028 [2024-07-21 11:58:54.495523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.470 ms 00:16:56.028 [2024-07-21 11:58:54.495544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.028 [2024-07-21 11:58:54.499098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.028 [2024-07-21 11:58:54.499175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:56.028 [2024-07-21 11:58:54.499207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.508 ms 00:16:56.028 [2024-07-21 11:58:54.499229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.028 [2024-07-21 11:58:54.502618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.028 [2024-07-21 11:58:54.502688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:56.028 [2024-07-21 11:58:54.502722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.341 ms 00:16:56.028 [2024-07-21 11:58:54.502746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.028 [2024-07-21 11:58:54.502802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.028 [2024-07-21 11:58:54.502851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:56.028 [2024-07-21 11:58:54.502883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:56.028 [2024-07-21 11:58:54.502904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.028 [2024-07-21 11:58:54.502998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:56.028 [2024-07-21 11:58:54.503048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:56.028 [2024-07-21 11:58:54.503075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:16:56.028 [2024-07-21 11:58:54.503087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:56.028 [2024-07-21 11:58:54.504103] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 
'FTL startup', duration = 3857.571 ms, result 0
00:16:56.028 {
00:16:56.028 "name": "ftl0",
00:16:56.028 "uuid": "6ffb713f-6d69-4f5e-ac36-15e7577725a0"
00:16:56.028 }
00:16:56.028 11:58:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0
00:16:56.028 11:58:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # jq -r .name
00:16:56.028 11:58:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@29 -- # grep -qw ftl0
00:16:56.028 11:58:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632
[2024-07-21 11:58:54.800330] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
I/O size of 69632 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 4 seconds...
00:17:00.267
00:17:00.267 Latency(us)
00:17:00.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:00.267 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:17:00.267 ftl0 : 4.00 1703.70 113.14 0.00 0.00 613.44 215.53 1058.88
00:17:00.267 ===================================================================================================================
00:17:00.267 Total : 1703.70 113.14 0.00 0.00 613.44 215.53 1058.88
[2024-07-21 11:58:58.798900] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
0
00:17:00.267 11:58:58 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-07-21 11:58:58.913767] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
00:17:04.459
00:17:04.459 Latency(us)
00:17:04.459 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:04.459 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:17:04.459 ftl0 : 4.01 10471.91 40.91 0.00 0.00 12198.71 257.57 36631.48
00:17:04.459 ===================================================================================================================
00:17:04.459 Total : 10471.91 40.91 0.00 0.00 12198.71 0.00 36631.48
[2024-07-21 11:59:02.925609] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
0
00:17:04.459 11:59:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-07-21 11:59:03.039317] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
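
The three perform_tests phases in this trace step up the load on the same ftl0 device: single-depth 68 KiB random writes, queue-depth-128 4 KiB random writes, and finally a verify workload that reads every block back and checks it against what was written. As a hedged sketch of just those invocations (the binary and script paths are copied from this trace; bdevperf itself was started earlier with '-z -T ftl0' and stays resident, so each call below only drives a new test against it):

#!/usr/bin/env bash
# Reconstruction of the three I/O phases driven in this log, not the test
# script itself. Assumes a bdevperf process started with '-z -T ftl0' is
# already listening on the default RPC socket.
BDEVPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

# Phase 1: queue depth 1, 69632-byte random writes for 4 s. 69632 bytes is
# above the 65536-byte zero-copy threshold, which explains the notice above.
"$BDEVPERF_PY" perform_tests -q 1 -w randwrite -t 4 -o 69632

# Phase 2: queue depth 128, 4 KiB random writes for 4 s.
"$BDEVPERF_PY" perform_tests -q 128 -w randwrite -t 4 -o 4096

# Phase 3: queue depth 128, 4 KiB verify workload; data is read back and
# compared, which is why a "Verification LBA range" line appears in the
# results that follow.
"$BDEVPERF_PY" perform_tests -q 128 -w verify -t 4 -o 4096
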
00:17:08.654
00:17:08.654 Latency(us)
00:17:08.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:08.654 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:08.654 Verification LBA range: start 0x0 length 0x1400000
00:17:08.654 ftl0 : 4.01 7862.04 30.71 0.00 0.00 16230.30 279.03 36402.53
00:17:08.654 ===================================================================================================================
00:17:08.654 Total : 7862.04 30.71 0.00 0.00 16230.30 0.00 36402.53
[2024-07-21 11:59:07.046667] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
0
00:17:08.654 11:59:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
[2024-07-21 11:59:07.236389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-07-21 11:59:07.236524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
[2024-07-21 11:59:07.236557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
[2024-07-21 11:59:07.236579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-07-21 11:59:07.236617] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-07-21 11:59:07.237305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-07-21 11:59:07.237345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
[2024-07-21 11:59:07.237376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms
[2024-07-21 11:59:07.237395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-07-21 11:59:07.239653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-07-21 11:59:07.239752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
[2024-07-21 11:59:07.239785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.187 ms
[2024-07-21 11:59:07.239806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-07-21 11:59:07.446833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-07-21 11:59:07.446980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
[2024-07-21 11:59:07.447030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 207.355 ms
[2024-07-21 11:59:07.447056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-07-21 11:59:07.452177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-07-21 11:59:07.452244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
[2024-07-21 11:59:07.452289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.067 ms
[2024-07-21 11:59:07.452309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-07-21 11:59:07.454159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-07-21 11:59:07.454225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
[2024-07-21 11:59:07.454259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 1.773 ms 00:17:08.654 [2024-07-21 11:59:07.454278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.654 [2024-07-21 11:59:07.459361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.654 [2024-07-21 11:59:07.459434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:08.654 [2024-07-21 11:59:07.459465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.038 ms 00:17:08.654 [2024-07-21 11:59:07.459485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.654 [2024-07-21 11:59:07.459622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.654 [2024-07-21 11:59:07.459649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:08.654 [2024-07-21 11:59:07.459672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:17:08.654 [2024-07-21 11:59:07.459699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.654 [2024-07-21 11:59:07.462033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.654 [2024-07-21 11:59:07.462098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:08.654 [2024-07-21 11:59:07.462129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.306 ms 00:17:08.654 [2024-07-21 11:59:07.462149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.654 [2024-07-21 11:59:07.463739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.654 [2024-07-21 11:59:07.463827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:08.654 [2024-07-21 11:59:07.463892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.548 ms 00:17:08.654 [2024-07-21 11:59:07.463912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.654 [2024-07-21 11:59:07.465239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.654 [2024-07-21 11:59:07.465315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:08.654 [2024-07-21 11:59:07.465366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.272 ms 00:17:08.654 [2024-07-21 11:59:07.465387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.654 [2024-07-21 11:59:07.466573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.654 [2024-07-21 11:59:07.466639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:08.654 [2024-07-21 11:59:07.466672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.111 ms 00:17:08.654 [2024-07-21 11:59:07.466692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.654 [2024-07-21 11:59:07.466786] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:08.654 [2024-07-21 11:59:07.466846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:08.654 [2024-07-21 11:59:07.466901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:08.654 [2024-07-21 11:59:07.466953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:08.654 [2024-07-21 11:59:07.467024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:08.654 
[2024-07-21 11:59:07.467063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:08.654 [2024-07-21 11:59:07.467126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:08.654 [2024-07-21 11:59:07.467177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:08.654 [2024-07-21 11:59:07.467234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:08.654 [2024-07-21 11:59:07.467272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:08.654 [2024-07-21 11:59:07.467317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:08.654 [2024-07-21 11:59:07.467368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:08.654 [2024-07-21 11:59:07.467427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:08.654 [2024-07-21 11:59:07.467465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:08.654 [2024-07-21 11:59:07.467509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:08.654 [2024-07-21 11:59:07.467555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:08.654 [2024-07-21 11:59:07.467602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:08.654 [2024-07-21 11:59:07.467647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:08.654 [2024-07-21 11:59:07.467690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:08.654 [2024-07-21 11:59:07.467734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:08.654 [2024-07-21 11:59:07.467782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:08.654 [2024-07-21 11:59:07.467834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:08.654 [2024-07-21 11:59:07.467873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:08.654 [2024-07-21 11:59:07.467911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:08.654 [2024-07-21 11:59:07.467971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:08.654 [2024-07-21 11:59:07.468022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.468064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.468116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.468159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.468203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:17:08.655 [2024-07-21 11:59:07.468248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.468301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.468349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.468385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.468430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.468472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.468510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.468547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.468586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.468631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.468675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.468714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.468754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.468789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.468856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.468899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.468944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.468999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.469998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.470005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.470014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.470020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.470029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.470037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.470047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.470054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.470063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.470070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.470079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.470086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.470096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.470103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.470113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.470121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.470129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.470138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.470147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:08.655 [2024-07-21 11:59:07.470154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:08.656 [2024-07-21 11:59:07.470163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:08.656 [2024-07-21 11:59:07.470177] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:08.656 [2024-07-21 11:59:07.470186] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 6ffb713f-6d69-4f5e-ac36-15e7577725a0 00:17:08.656 [2024-07-21 11:59:07.470193] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:08.656 [2024-07-21 11:59:07.470206] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
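
A note on the counters around this point of the dump: it pairs 960 total writes with the zero user writes reported just below, and write amplification is conventionally the ratio of media writes to host writes, so the ratio diverges and the WAF line that follows prints 'inf'. A minimal sketch of that arithmetic, with both counter values copied from the dump (the zero guard reflects that convention and is our assumption, not a transcription of ftl_debug.c):

# The two counters as reported by this statistics dump.
total_writes=960   # media writes ("total writes")
user_writes=0      # host writes ("user writes")
if [ "$user_writes" -eq 0 ]; then
  echo "WAF: inf"  # matches the log: no host writes recorded
else
  awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.2f\n", t / u }'
fi
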
00:17:08.656 [2024-07-21 11:59:07.470213] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:08.656 [2024-07-21 11:59:07.470222] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:08.656 [2024-07-21 11:59:07.470229] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:08.656 [2024-07-21 11:59:07.470240] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:08.656 [2024-07-21 11:59:07.470247] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:08.656 [2024-07-21 11:59:07.470254] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:08.656 [2024-07-21 11:59:07.470261] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:08.656 [2024-07-21 11:59:07.470271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.656 [2024-07-21 11:59:07.470279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:08.656 [2024-07-21 11:59:07.470291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.508 ms 00:17:08.656 [2024-07-21 11:59:07.470299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.656 [2024-07-21 11:59:07.472012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.656 [2024-07-21 11:59:07.472028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:08.656 [2024-07-21 11:59:07.472038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.696 ms 00:17:08.656 [2024-07-21 11:59:07.472045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.656 [2024-07-21 11:59:07.472159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.656 [2024-07-21 11:59:07.472169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:08.656 [2024-07-21 11:59:07.472179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:08.656 [2024-07-21 11:59:07.472194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.656 [2024-07-21 11:59:07.477836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.656 [2024-07-21 11:59:07.477925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:08.656 [2024-07-21 11:59:07.477955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.656 [2024-07-21 11:59:07.477979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.656 [2024-07-21 11:59:07.478043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.656 [2024-07-21 11:59:07.478068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:08.656 [2024-07-21 11:59:07.478089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.656 [2024-07-21 11:59:07.478107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.656 [2024-07-21 11:59:07.478189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.656 [2024-07-21 11:59:07.478218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:08.656 [2024-07-21 11:59:07.478256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.656 [2024-07-21 11:59:07.478281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.656 [2024-07-21 11:59:07.478321] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.656 [2024-07-21 11:59:07.478382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:08.656 [2024-07-21 11:59:07.478419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.656 [2024-07-21 11:59:07.478450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.656 [2024-07-21 11:59:07.491633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.656 [2024-07-21 11:59:07.491755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:08.656 [2024-07-21 11:59:07.491788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.656 [2024-07-21 11:59:07.491807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.656 [2024-07-21 11:59:07.499906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.656 [2024-07-21 11:59:07.499992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:08.656 [2024-07-21 11:59:07.500024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.656 [2024-07-21 11:59:07.500045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.656 [2024-07-21 11:59:07.500129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.656 [2024-07-21 11:59:07.500153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:08.656 [2024-07-21 11:59:07.500175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.656 [2024-07-21 11:59:07.500193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.656 [2024-07-21 11:59:07.500294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.656 [2024-07-21 11:59:07.500323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:08.656 [2024-07-21 11:59:07.500359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.656 [2024-07-21 11:59:07.500385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.656 [2024-07-21 11:59:07.500482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.656 [2024-07-21 11:59:07.500529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:08.656 [2024-07-21 11:59:07.500557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.656 [2024-07-21 11:59:07.500576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.656 [2024-07-21 11:59:07.500638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.656 [2024-07-21 11:59:07.500678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:08.656 [2024-07-21 11:59:07.500710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.656 [2024-07-21 11:59:07.500729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.656 [2024-07-21 11:59:07.500797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.656 [2024-07-21 11:59:07.500842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:08.656 [2024-07-21 11:59:07.500881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.656 [2024-07-21 11:59:07.500901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:08.656 [2024-07-21 11:59:07.500972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.656 [2024-07-21 11:59:07.501005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:08.656 [2024-07-21 11:59:07.501034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.656 [2024-07-21 11:59:07.501064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.656 [2024-07-21 11:59:07.501219] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 265.298 ms, result 0 00:17:08.656 true 00:17:08.916 11:59:07 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # killprocess 89261 00:17:08.916 11:59:07 ftl.ftl_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 89261 ']' 00:17:08.916 11:59:07 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # kill -0 89261 00:17:08.916 11:59:07 ftl.ftl_bdevperf -- common/autotest_common.sh@951 -- # uname 00:17:08.916 11:59:07 ftl.ftl_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:08.916 11:59:07 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89261 00:17:08.916 11:59:07 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:08.916 11:59:07 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:08.916 11:59:07 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89261' 00:17:08.916 killing process with pid 89261 00:17:08.916 11:59:07 ftl.ftl_bdevperf -- common/autotest_common.sh@965 -- # kill 89261 00:17:08.916 Received shutdown signal, test time was about 4.000000 seconds 00:17:08.916 00:17:08.916 Latency(us) 00:17:08.916 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.916 =================================================================================================================== 00:17:08.916 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:08.916 11:59:07 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # wait 89261 00:17:12.250 11:59:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:17:12.250 11:59:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:17:12.250 11:59:10 ftl.ftl_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:12.250 11:59:10 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:12.250 11:59:10 ftl.ftl_bdevperf -- ftl/bdevperf.sh@41 -- # remove_shm 00:17:12.250 11:59:10 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:12.250 Remove shared memory files 00:17:12.250 11:59:10 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:17:12.250 11:59:10 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:17:12.250 11:59:10 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:17:12.250 11:59:10 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:12.250 11:59:10 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:17:12.250 ************************************ 00:17:12.250 END TEST ftl_bdevperf 00:17:12.250 ************************************ 00:17:12.250 00:17:12.250 real 0m23.841s 00:17:12.250 user 0m26.172s 00:17:12.250 sys 0m1.124s 00:17:12.250 11:59:10 ftl.ftl_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:12.250 11:59:10 ftl.ftl_bdevperf -- common/autotest_common.sh@10 
-- # set +x 00:17:12.250 11:59:10 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:12.250 11:59:10 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:17:12.250 11:59:10 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:12.250 11:59:10 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:12.250 ************************************ 00:17:12.250 START TEST ftl_trim 00:17:12.250 ************************************ 00:17:12.250 11:59:10 ftl.ftl_trim -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:12.250 * Looking for test storage... 00:17:12.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:12.250 11:59:10 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:12.250 11:59:10 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:12.250 11:59:10 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:12.250 11:59:10 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:12.250 11:59:10 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:12.250 11:59:10 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:12.250 11:59:10 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:12.250 11:59:10 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:12.250 11:59:10 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:12.250 11:59:10 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:12.250 11:59:10 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:12.250 11:59:10 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:12.250 11:59:10 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:12.250 11:59:10 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:12.250 11:59:10 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:12.250 11:59:10 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:12.250 11:59:10 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:12.250 11:59:10 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:12.250 11:59:10 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:12.250 11:59:10 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:12.250 11:59:10 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:12.250 11:59:10 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:12.250 11:59:10 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:12.250 11:59:10 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:12.250 11:59:10 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:12.251 11:59:10 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:12.251 11:59:10 ftl.ftl_trim 
-- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:12.251 11:59:10 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:12.251 11:59:10 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:12.251 11:59:10 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:12.251 11:59:10 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:17:12.251 11:59:10 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:17:12.251 11:59:10 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:17:12.251 11:59:10 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:12.251 11:59:10 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:12.251 11:59:10 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:12.251 11:59:10 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:17:12.251 11:59:10 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:12.251 11:59:10 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:12.251 11:59:10 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:12.251 11:59:10 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:12.251 11:59:10 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=89631 00:17:12.251 11:59:10 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 89631 00:17:12.251 11:59:10 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:12.251 11:59:10 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 89631 ']' 00:17:12.251 11:59:10 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.251 11:59:10 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:12.251 11:59:10 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.251 11:59:10 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:12.251 11:59:10 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:12.251 [2024-07-21 11:59:11.052548] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
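The trace above shows trim.sh's startup sequence: spdk_tgt is launched in the background with core mask 0x7 (hence the three reactor threads reported below), its pid (89631) is captured as svcpid, and waitforlisten blocks until the target answers on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait pattern, assuming SPDK's stock scripts/rpc.py and its rpc_get_methods RPC — an illustrative loop, not the actual waitforlisten from autotest_common.sh:

    # Launch the SPDK target on cores 0-2 (mask 0x7) and capture its pid.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 &
    svcpid=$!

    # Poll the RPC socket until the target responds, bailing out if it dies.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        kill -0 "$svcpid" 2> /dev/null || exit 1   # target exited during startup
        if "$rpc" -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; then
            break                                  # socket is up; RPCs can be issued
        fi
        sleep 0.5
    done

Once the wait completes, the script drives the rest of the setup over that socket (bdev_nvme_attach_controller, bdev_lvol_create_lvstore, bdev_split_create, bdev_ftl_create) — exactly the RPC sequence traced in the lines that follow.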
00:17:12.251 [2024-07-21 11:59:11.052780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89631 ] 00:17:12.511 [2024-07-21 11:59:11.219668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:12.511 [2024-07-21 11:59:11.268995] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:12.511 [2024-07-21 11:59:11.269087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.511 [2024-07-21 11:59:11.269189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:13.081 11:59:11 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:13.081 11:59:11 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:17:13.081 11:59:11 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:13.081 11:59:11 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:17:13.081 11:59:11 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:13.081 11:59:11 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:17:13.081 11:59:11 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:17:13.081 11:59:11 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:13.339 11:59:12 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:13.339 11:59:12 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:17:13.339 11:59:12 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:13.339 11:59:12 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:17:13.339 11:59:12 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:13.339 11:59:12 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:17:13.339 11:59:12 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:17:13.339 11:59:12 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:13.598 11:59:12 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:13.598 { 00:17:13.598 "name": "nvme0n1", 00:17:13.598 "aliases": [ 00:17:13.598 "c626e3e5-7451-49e0-a56e-103e07a30782" 00:17:13.598 ], 00:17:13.598 "product_name": "NVMe disk", 00:17:13.598 "block_size": 4096, 00:17:13.598 "num_blocks": 1310720, 00:17:13.598 "uuid": "c626e3e5-7451-49e0-a56e-103e07a30782", 00:17:13.598 "assigned_rate_limits": { 00:17:13.598 "rw_ios_per_sec": 0, 00:17:13.598 "rw_mbytes_per_sec": 0, 00:17:13.598 "r_mbytes_per_sec": 0, 00:17:13.598 "w_mbytes_per_sec": 0 00:17:13.598 }, 00:17:13.598 "claimed": true, 00:17:13.598 "claim_type": "read_many_write_one", 00:17:13.598 "zoned": false, 00:17:13.598 "supported_io_types": { 00:17:13.598 "read": true, 00:17:13.598 "write": true, 00:17:13.598 "unmap": true, 00:17:13.598 "write_zeroes": true, 00:17:13.598 "flush": true, 00:17:13.598 "reset": true, 00:17:13.598 "compare": true, 00:17:13.598 "compare_and_write": false, 00:17:13.598 "abort": true, 00:17:13.598 "nvme_admin": true, 00:17:13.598 "nvme_io": true 00:17:13.598 }, 00:17:13.598 "driver_specific": { 00:17:13.598 "nvme": [ 00:17:13.598 { 00:17:13.598 "pci_address": "0000:00:11.0", 00:17:13.598 "trid": { 00:17:13.598 "trtype": "PCIe", 00:17:13.598 "traddr": "0000:00:11.0" 00:17:13.598 }, 00:17:13.598 "ctrlr_data": { 
00:17:13.598 "cntlid": 0, 00:17:13.598 "vendor_id": "0x1b36", 00:17:13.598 "model_number": "QEMU NVMe Ctrl", 00:17:13.598 "serial_number": "12341", 00:17:13.598 "firmware_revision": "8.0.0", 00:17:13.598 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:13.598 "oacs": { 00:17:13.598 "security": 0, 00:17:13.598 "format": 1, 00:17:13.598 "firmware": 0, 00:17:13.598 "ns_manage": 1 00:17:13.598 }, 00:17:13.598 "multi_ctrlr": false, 00:17:13.598 "ana_reporting": false 00:17:13.598 }, 00:17:13.598 "vs": { 00:17:13.598 "nvme_version": "1.4" 00:17:13.598 }, 00:17:13.598 "ns_data": { 00:17:13.598 "id": 1, 00:17:13.598 "can_share": false 00:17:13.598 } 00:17:13.598 } 00:17:13.598 ], 00:17:13.598 "mp_policy": "active_passive" 00:17:13.598 } 00:17:13.598 } 00:17:13.598 ]' 00:17:13.598 11:59:12 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:13.598 11:59:12 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:17:13.598 11:59:12 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:13.598 11:59:12 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=1310720 00:17:13.598 11:59:12 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:17:13.598 11:59:12 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 5120 00:17:13.598 11:59:12 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:17:13.598 11:59:12 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:13.598 11:59:12 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:17:13.598 11:59:12 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:13.598 11:59:12 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:13.857 11:59:12 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=2c407c8a-39be-44e5-8876-8f119937909f 00:17:13.857 11:59:12 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:17:13.857 11:59:12 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2c407c8a-39be-44e5-8876-8f119937909f 00:17:14.116 11:59:12 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:14.116 11:59:12 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=aa013710-5471-4c69-9975-203ea300a289 00:17:14.116 11:59:12 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u aa013710-5471-4c69-9975-203ea300a289 00:17:14.375 11:59:13 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=b08e3a43-a061-409b-be38-37f0b226f30d 00:17:14.375 11:59:13 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b08e3a43-a061-409b-be38-37f0b226f30d 00:17:14.375 11:59:13 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:17:14.375 11:59:13 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:14.375 11:59:13 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=b08e3a43-a061-409b-be38-37f0b226f30d 00:17:14.375 11:59:13 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:17:14.375 11:59:13 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size b08e3a43-a061-409b-be38-37f0b226f30d 00:17:14.375 11:59:13 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=b08e3a43-a061-409b-be38-37f0b226f30d 00:17:14.375 11:59:13 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:14.375 11:59:13 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:17:14.375 11:59:13 ftl.ftl_trim -- 
common/autotest_common.sh@1377 -- # local nb 00:17:14.375 11:59:13 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b08e3a43-a061-409b-be38-37f0b226f30d 00:17:14.633 11:59:13 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:14.633 { 00:17:14.634 "name": "b08e3a43-a061-409b-be38-37f0b226f30d", 00:17:14.634 "aliases": [ 00:17:14.634 "lvs/nvme0n1p0" 00:17:14.634 ], 00:17:14.634 "product_name": "Logical Volume", 00:17:14.634 "block_size": 4096, 00:17:14.634 "num_blocks": 26476544, 00:17:14.634 "uuid": "b08e3a43-a061-409b-be38-37f0b226f30d", 00:17:14.634 "assigned_rate_limits": { 00:17:14.634 "rw_ios_per_sec": 0, 00:17:14.634 "rw_mbytes_per_sec": 0, 00:17:14.634 "r_mbytes_per_sec": 0, 00:17:14.634 "w_mbytes_per_sec": 0 00:17:14.634 }, 00:17:14.634 "claimed": false, 00:17:14.634 "zoned": false, 00:17:14.634 "supported_io_types": { 00:17:14.634 "read": true, 00:17:14.634 "write": true, 00:17:14.634 "unmap": true, 00:17:14.634 "write_zeroes": true, 00:17:14.634 "flush": false, 00:17:14.634 "reset": true, 00:17:14.634 "compare": false, 00:17:14.634 "compare_and_write": false, 00:17:14.634 "abort": false, 00:17:14.634 "nvme_admin": false, 00:17:14.634 "nvme_io": false 00:17:14.634 }, 00:17:14.634 "driver_specific": { 00:17:14.634 "lvol": { 00:17:14.634 "lvol_store_uuid": "aa013710-5471-4c69-9975-203ea300a289", 00:17:14.634 "base_bdev": "nvme0n1", 00:17:14.634 "thin_provision": true, 00:17:14.634 "num_allocated_clusters": 0, 00:17:14.634 "snapshot": false, 00:17:14.634 "clone": false, 00:17:14.634 "esnap_clone": false 00:17:14.634 } 00:17:14.634 } 00:17:14.634 } 00:17:14.634 ]' 00:17:14.634 11:59:13 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:14.634 11:59:13 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:17:14.634 11:59:13 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:14.634 11:59:13 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:14.634 11:59:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:14.634 11:59:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:17:14.634 11:59:13 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:17:14.634 11:59:13 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:17:14.634 11:59:13 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:14.892 11:59:13 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:14.892 11:59:13 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:14.892 11:59:13 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size b08e3a43-a061-409b-be38-37f0b226f30d 00:17:14.892 11:59:13 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=b08e3a43-a061-409b-be38-37f0b226f30d 00:17:14.892 11:59:13 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:14.892 11:59:13 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:17:14.892 11:59:13 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:17:14.892 11:59:13 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b08e3a43-a061-409b-be38-37f0b226f30d 00:17:15.150 11:59:13 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:15.150 { 00:17:15.150 "name": "b08e3a43-a061-409b-be38-37f0b226f30d", 00:17:15.150 "aliases": [ 00:17:15.150 
"lvs/nvme0n1p0" 00:17:15.150 ], 00:17:15.150 "product_name": "Logical Volume", 00:17:15.150 "block_size": 4096, 00:17:15.150 "num_blocks": 26476544, 00:17:15.150 "uuid": "b08e3a43-a061-409b-be38-37f0b226f30d", 00:17:15.150 "assigned_rate_limits": { 00:17:15.150 "rw_ios_per_sec": 0, 00:17:15.150 "rw_mbytes_per_sec": 0, 00:17:15.150 "r_mbytes_per_sec": 0, 00:17:15.150 "w_mbytes_per_sec": 0 00:17:15.150 }, 00:17:15.150 "claimed": false, 00:17:15.150 "zoned": false, 00:17:15.150 "supported_io_types": { 00:17:15.150 "read": true, 00:17:15.150 "write": true, 00:17:15.150 "unmap": true, 00:17:15.150 "write_zeroes": true, 00:17:15.150 "flush": false, 00:17:15.150 "reset": true, 00:17:15.150 "compare": false, 00:17:15.150 "compare_and_write": false, 00:17:15.150 "abort": false, 00:17:15.150 "nvme_admin": false, 00:17:15.150 "nvme_io": false 00:17:15.150 }, 00:17:15.150 "driver_specific": { 00:17:15.150 "lvol": { 00:17:15.150 "lvol_store_uuid": "aa013710-5471-4c69-9975-203ea300a289", 00:17:15.150 "base_bdev": "nvme0n1", 00:17:15.150 "thin_provision": true, 00:17:15.150 "num_allocated_clusters": 0, 00:17:15.150 "snapshot": false, 00:17:15.150 "clone": false, 00:17:15.150 "esnap_clone": false 00:17:15.150 } 00:17:15.150 } 00:17:15.150 } 00:17:15.150 ]' 00:17:15.150 11:59:13 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:15.150 11:59:13 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:17:15.150 11:59:13 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:15.150 11:59:13 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:15.150 11:59:13 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:15.150 11:59:13 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:17:15.150 11:59:13 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:17:15.150 11:59:13 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:15.409 11:59:14 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:15.409 11:59:14 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:15.409 11:59:14 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size b08e3a43-a061-409b-be38-37f0b226f30d 00:17:15.409 11:59:14 ftl.ftl_trim -- common/autotest_common.sh@1374 -- # local bdev_name=b08e3a43-a061-409b-be38-37f0b226f30d 00:17:15.409 11:59:14 ftl.ftl_trim -- common/autotest_common.sh@1375 -- # local bdev_info 00:17:15.409 11:59:14 ftl.ftl_trim -- common/autotest_common.sh@1376 -- # local bs 00:17:15.409 11:59:14 ftl.ftl_trim -- common/autotest_common.sh@1377 -- # local nb 00:17:15.409 11:59:14 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b08e3a43-a061-409b-be38-37f0b226f30d 00:17:15.409 11:59:14 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:17:15.409 { 00:17:15.409 "name": "b08e3a43-a061-409b-be38-37f0b226f30d", 00:17:15.409 "aliases": [ 00:17:15.409 "lvs/nvme0n1p0" 00:17:15.409 ], 00:17:15.409 "product_name": "Logical Volume", 00:17:15.409 "block_size": 4096, 00:17:15.409 "num_blocks": 26476544, 00:17:15.409 "uuid": "b08e3a43-a061-409b-be38-37f0b226f30d", 00:17:15.409 "assigned_rate_limits": { 00:17:15.409 "rw_ios_per_sec": 0, 00:17:15.409 "rw_mbytes_per_sec": 0, 00:17:15.409 "r_mbytes_per_sec": 0, 00:17:15.409 "w_mbytes_per_sec": 0 00:17:15.409 }, 00:17:15.409 "claimed": false, 00:17:15.409 "zoned": false, 00:17:15.409 "supported_io_types": { 00:17:15.409 "read": 
true, 00:17:15.409 "write": true, 00:17:15.409 "unmap": true, 00:17:15.409 "write_zeroes": true, 00:17:15.409 "flush": false, 00:17:15.409 "reset": true, 00:17:15.409 "compare": false, 00:17:15.409 "compare_and_write": false, 00:17:15.409 "abort": false, 00:17:15.409 "nvme_admin": false, 00:17:15.409 "nvme_io": false 00:17:15.409 }, 00:17:15.409 "driver_specific": { 00:17:15.409 "lvol": { 00:17:15.409 "lvol_store_uuid": "aa013710-5471-4c69-9975-203ea300a289", 00:17:15.409 "base_bdev": "nvme0n1", 00:17:15.409 "thin_provision": true, 00:17:15.409 "num_allocated_clusters": 0, 00:17:15.409 "snapshot": false, 00:17:15.409 "clone": false, 00:17:15.409 "esnap_clone": false 00:17:15.409 } 00:17:15.409 } 00:17:15.409 } 00:17:15.409 ]' 00:17:15.409 11:59:14 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:17:15.668 11:59:14 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # bs=4096 00:17:15.668 11:59:14 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:17:15.668 11:59:14 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # nb=26476544 00:17:15.668 11:59:14 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:17:15.668 11:59:14 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # echo 103424 00:17:15.668 11:59:14 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:15.668 11:59:14 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b08e3a43-a061-409b-be38-37f0b226f30d -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:15.668 [2024-07-21 11:59:14.515247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.668 [2024-07-21 11:59:14.515297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:15.668 [2024-07-21 11:59:14.515311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:15.668 [2024-07-21 11:59:14.515335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.668 [2024-07-21 11:59:14.517766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.668 [2024-07-21 11:59:14.517803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:15.668 [2024-07-21 11:59:14.517830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.380 ms 00:17:15.668 [2024-07-21 11:59:14.517839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.668 [2024-07-21 11:59:14.517968] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:15.668 [2024-07-21 11:59:14.518193] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:15.668 [2024-07-21 11:59:14.518214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.668 [2024-07-21 11:59:14.518222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:15.668 [2024-07-21 11:59:14.518233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:17:15.668 [2024-07-21 11:59:14.518244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.668 [2024-07-21 11:59:14.518419] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 97f51f8a-8f80-4c71-8dcb-fbdd0567b8b7 00:17:15.668 [2024-07-21 11:59:14.519865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.668 [2024-07-21 11:59:14.519898] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:15.668 [2024-07-21 11:59:14.519908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:17:15.668 [2024-07-21 11:59:14.519934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.668 [2024-07-21 11:59:14.527400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.668 [2024-07-21 11:59:14.527463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:15.668 [2024-07-21 11:59:14.527473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.330 ms 00:17:15.668 [2024-07-21 11:59:14.527511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.668 [2024-07-21 11:59:14.527683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.668 [2024-07-21 11:59:14.527699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:15.668 [2024-07-21 11:59:14.527707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:17:15.668 [2024-07-21 11:59:14.527717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.668 [2024-07-21 11:59:14.527804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.668 [2024-07-21 11:59:14.527842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:15.668 [2024-07-21 11:59:14.527850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:15.668 [2024-07-21 11:59:14.527859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.668 [2024-07-21 11:59:14.527946] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:15.668 [2024-07-21 11:59:14.529655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.668 [2024-07-21 11:59:14.529681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:15.668 [2024-07-21 11:59:14.529692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.717 ms 00:17:15.668 [2024-07-21 11:59:14.529702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.668 [2024-07-21 11:59:14.529807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.668 [2024-07-21 11:59:14.529815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:15.668 [2024-07-21 11:59:14.529837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:15.668 [2024-07-21 11:59:14.529860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.668 [2024-07-21 11:59:14.529933] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:15.668 [2024-07-21 11:59:14.530086] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:15.668 [2024-07-21 11:59:14.530101] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:15.668 [2024-07-21 11:59:14.530111] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:15.668 [2024-07-21 11:59:14.530122] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:15.668 [2024-07-21 11:59:14.530131] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV 
cache device capacity: 5171.00 MiB 00:17:15.668 [2024-07-21 11:59:14.530140] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:15.669 [2024-07-21 11:59:14.530150] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:15.669 [2024-07-21 11:59:14.530159] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:15.669 [2024-07-21 11:59:14.530166] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:15.669 [2024-07-21 11:59:14.530178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.669 [2024-07-21 11:59:14.530187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:15.669 [2024-07-21 11:59:14.530196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:17:15.669 [2024-07-21 11:59:14.530204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.669 [2024-07-21 11:59:14.530316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.669 [2024-07-21 11:59:14.530330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:15.669 [2024-07-21 11:59:14.530341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:15.669 [2024-07-21 11:59:14.530348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.669 [2024-07-21 11:59:14.530511] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:15.669 [2024-07-21 11:59:14.530520] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:15.669 [2024-07-21 11:59:14.530533] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:15.669 [2024-07-21 11:59:14.530540] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.669 [2024-07-21 11:59:14.530550] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:15.669 [2024-07-21 11:59:14.530556] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:15.669 [2024-07-21 11:59:14.530564] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:15.669 [2024-07-21 11:59:14.530571] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:15.669 [2024-07-21 11:59:14.530579] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:15.669 [2024-07-21 11:59:14.530585] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:15.669 [2024-07-21 11:59:14.530592] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:15.669 [2024-07-21 11:59:14.530599] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:15.669 [2024-07-21 11:59:14.530607] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:15.669 [2024-07-21 11:59:14.530613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:15.669 [2024-07-21 11:59:14.530623] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:15.669 [2024-07-21 11:59:14.530629] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.669 [2024-07-21 11:59:14.530638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:15.669 [2024-07-21 11:59:14.530646] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:15.669 [2024-07-21 11:59:14.530654] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
00:17:15.669 [2024-07-21 11:59:14.530660] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:15.669 [2024-07-21 11:59:14.530668] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:15.669 [2024-07-21 11:59:14.530674] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:15.669 [2024-07-21 11:59:14.530682] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:15.669 [2024-07-21 11:59:14.530688] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:15.669 [2024-07-21 11:59:14.530696] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:15.669 [2024-07-21 11:59:14.530702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:15.669 [2024-07-21 11:59:14.530710] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:15.669 [2024-07-21 11:59:14.530716] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:15.669 [2024-07-21 11:59:14.530725] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:15.669 [2024-07-21 11:59:14.530731] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:15.669 [2024-07-21 11:59:14.530741] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:15.669 [2024-07-21 11:59:14.530747] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:15.669 [2024-07-21 11:59:14.530755] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:15.669 [2024-07-21 11:59:14.530762] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:15.669 [2024-07-21 11:59:14.530770] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:15.669 [2024-07-21 11:59:14.530777] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:15.669 [2024-07-21 11:59:14.530786] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:15.669 [2024-07-21 11:59:14.530792] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:15.669 [2024-07-21 11:59:14.530800] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:15.669 [2024-07-21 11:59:14.530806] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.669 [2024-07-21 11:59:14.530813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:15.669 [2024-07-21 11:59:14.530839] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:15.669 [2024-07-21 11:59:14.530848] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.669 [2024-07-21 11:59:14.530854] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:15.669 [2024-07-21 11:59:14.530863] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:15.669 [2024-07-21 11:59:14.530870] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:15.669 [2024-07-21 11:59:14.530881] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:15.669 [2024-07-21 11:59:14.530888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:15.669 [2024-07-21 11:59:14.530896] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:15.669 [2024-07-21 11:59:14.530903] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:15.669 [2024-07-21 11:59:14.530911] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region data_btm 00:17:15.669 [2024-07-21 11:59:14.530918] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:15.669 [2024-07-21 11:59:14.530926] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:15.669 [2024-07-21 11:59:14.530937] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:15.669 [2024-07-21 11:59:14.530948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:15.669 [2024-07-21 11:59:14.530956] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:15.669 [2024-07-21 11:59:14.530966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:15.669 [2024-07-21 11:59:14.530973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:15.669 [2024-07-21 11:59:14.530981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:15.669 [2024-07-21 11:59:14.530989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:15.669 [2024-07-21 11:59:14.530997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:15.669 [2024-07-21 11:59:14.531004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:15.669 [2024-07-21 11:59:14.531014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:15.669 [2024-07-21 11:59:14.531021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:15.669 [2024-07-21 11:59:14.531030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:15.669 [2024-07-21 11:59:14.531036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:15.669 [2024-07-21 11:59:14.531045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:15.669 [2024-07-21 11:59:14.531052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:15.669 [2024-07-21 11:59:14.531061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:15.669 [2024-07-21 11:59:14.531068] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:15.669 [2024-07-21 11:59:14.531077] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:15.669 [2024-07-21 11:59:14.531086] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:15.669 [2024-07-21 
11:59:14.531151] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:15.669 [2024-07-21 11:59:14.531159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:15.669 [2024-07-21 11:59:14.531168] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:15.669 [2024-07-21 11:59:14.531176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.669 [2024-07-21 11:59:14.531185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:15.669 [2024-07-21 11:59:14.531193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.718 ms 00:17:15.669 [2024-07-21 11:59:14.531204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.928 [2024-07-21 11:59:14.531390] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:17:15.928 [2024-07-21 11:59:14.531406] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:20.119 [2024-07-21 11:59:18.350233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.119 [2024-07-21 11:59:18.350382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:20.120 [2024-07-21 11:59:18.350433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3826.204 ms 00:17:20.120 [2024-07-21 11:59:18.350460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.120 [2024-07-21 11:59:18.361540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.120 [2024-07-21 11:59:18.361719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:20.120 [2024-07-21 11:59:18.361754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.934 ms 00:17:20.120 [2024-07-21 11:59:18.361795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.120 [2024-07-21 11:59:18.362021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.120 [2024-07-21 11:59:18.362085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:20.120 [2024-07-21 11:59:18.362115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:20.120 [2024-07-21 11:59:18.362149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.120 [2024-07-21 11:59:18.382721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.120 [2024-07-21 11:59:18.382844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:20.120 [2024-07-21 11:59:18.382894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.505 ms 00:17:20.120 [2024-07-21 11:59:18.382918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.120 [2024-07-21 11:59:18.383065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.120 [2024-07-21 11:59:18.383115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:20.120 [2024-07-21 11:59:18.383149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:17:20.120 [2024-07-21 11:59:18.383179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.120 [2024-07-21 
11:59:18.383664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.120 [2024-07-21 11:59:18.383708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:20.120 [2024-07-21 11:59:18.383736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:17:20.120 [2024-07-21 11:59:18.383765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.120 [2024-07-21 11:59:18.383970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.120 [2024-07-21 11:59:18.384015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:20.120 [2024-07-21 11:59:18.384043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:20.120 [2024-07-21 11:59:18.384076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.120 [2024-07-21 11:59:18.391432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.120 [2024-07-21 11:59:18.391525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:20.120 [2024-07-21 11:59:18.391565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.276 ms 00:17:20.120 [2024-07-21 11:59:18.391601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.120 [2024-07-21 11:59:18.399666] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:20.120 [2024-07-21 11:59:18.415878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.120 [2024-07-21 11:59:18.415994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:20.120 [2024-07-21 11:59:18.416042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.107 ms 00:17:20.120 [2024-07-21 11:59:18.416062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.120 [2024-07-21 11:59:18.505607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.120 [2024-07-21 11:59:18.505758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:20.120 [2024-07-21 11:59:18.505792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.548 ms 00:17:20.120 [2024-07-21 11:59:18.505850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.120 [2024-07-21 11:59:18.506116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.120 [2024-07-21 11:59:18.506162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:20.120 [2024-07-21 11:59:18.506193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:17:20.120 [2024-07-21 11:59:18.506221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.120 [2024-07-21 11:59:18.510246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.120 [2024-07-21 11:59:18.510318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:20.120 [2024-07-21 11:59:18.510354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.923 ms 00:17:20.120 [2024-07-21 11:59:18.510382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.120 [2024-07-21 11:59:18.513507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.120 [2024-07-21 11:59:18.513572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:20.120 [2024-07-21 
11:59:18.513628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.009 ms 00:17:20.120 [2024-07-21 11:59:18.513647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.120 [2024-07-21 11:59:18.514021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.120 [2024-07-21 11:59:18.514069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:20.120 [2024-07-21 11:59:18.514102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:17:20.120 [2024-07-21 11:59:18.514133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.120 [2024-07-21 11:59:18.559699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.120 [2024-07-21 11:59:18.559795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:20.120 [2024-07-21 11:59:18.559865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.541 ms 00:17:20.120 [2024-07-21 11:59:18.559914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.120 [2024-07-21 11:59:18.564400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.120 [2024-07-21 11:59:18.564476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:20.120 [2024-07-21 11:59:18.564506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.378 ms 00:17:20.120 [2024-07-21 11:59:18.564526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.120 [2024-07-21 11:59:18.567918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.120 [2024-07-21 11:59:18.567986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:20.120 [2024-07-21 11:59:18.568016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.294 ms 00:17:20.120 [2024-07-21 11:59:18.568036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.120 [2024-07-21 11:59:18.571996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.120 [2024-07-21 11:59:18.572065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:20.120 [2024-07-21 11:59:18.572119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.801 ms 00:17:20.120 [2024-07-21 11:59:18.572149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.120 [2024-07-21 11:59:18.572275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.120 [2024-07-21 11:59:18.572326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:20.120 [2024-07-21 11:59:18.572358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:20.120 [2024-07-21 11:59:18.572389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.120 [2024-07-21 11:59:18.572543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.120 [2024-07-21 11:59:18.572575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:20.120 [2024-07-21 11:59:18.572605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:20.120 [2024-07-21 11:59:18.572634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.120 [2024-07-21 11:59:18.573712] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:20.120 [2024-07-21 11:59:18.574766] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4066.101 ms, result 0 00:17:20.120 [2024-07-21 11:59:18.575969] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:20.120 { 00:17:20.120 "name": "ftl0", 00:17:20.120 "uuid": "97f51f8a-8f80-4c71-8dcb-fbdd0567b8b7" 00:17:20.120 } 00:17:20.120 11:59:18 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:20.120 11:59:18 ftl.ftl_trim -- common/autotest_common.sh@895 -- # local bdev_name=ftl0 00:17:20.120 11:59:18 ftl.ftl_trim -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:17:20.120 11:59:18 ftl.ftl_trim -- common/autotest_common.sh@897 -- # local i 00:17:20.120 11:59:18 ftl.ftl_trim -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:17:20.120 11:59:18 ftl.ftl_trim -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:17:20.120 11:59:18 ftl.ftl_trim -- common/autotest_common.sh@900 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:20.120 11:59:18 ftl.ftl_trim -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:20.120 [ 00:17:20.120 { 00:17:20.120 "name": "ftl0", 00:17:20.120 "aliases": [ 00:17:20.120 "97f51f8a-8f80-4c71-8dcb-fbdd0567b8b7" 00:17:20.120 ], 00:17:20.120 "product_name": "FTL disk", 00:17:20.120 "block_size": 4096, 00:17:20.120 "num_blocks": 23592960, 00:17:20.120 "uuid": "97f51f8a-8f80-4c71-8dcb-fbdd0567b8b7", 00:17:20.120 "assigned_rate_limits": { 00:17:20.120 "rw_ios_per_sec": 0, 00:17:20.120 "rw_mbytes_per_sec": 0, 00:17:20.120 "r_mbytes_per_sec": 0, 00:17:20.120 "w_mbytes_per_sec": 0 00:17:20.120 }, 00:17:20.120 "claimed": false, 00:17:20.120 "zoned": false, 00:17:20.120 "supported_io_types": { 00:17:20.120 "read": true, 00:17:20.120 "write": true, 00:17:20.120 "unmap": true, 00:17:20.120 "write_zeroes": true, 00:17:20.120 "flush": true, 00:17:20.120 "reset": false, 00:17:20.120 "compare": false, 00:17:20.120 "compare_and_write": false, 00:17:20.120 "abort": false, 00:17:20.120 "nvme_admin": false, 00:17:20.120 "nvme_io": false 00:17:20.120 }, 00:17:20.120 "driver_specific": { 00:17:20.120 "ftl": { 00:17:20.120 "base_bdev": "b08e3a43-a061-409b-be38-37f0b226f30d", 00:17:20.120 "cache": "nvc0n1p0" 00:17:20.120 } 00:17:20.120 } 00:17:20.120 } 00:17:20.120 ] 00:17:20.120 11:59:18 ftl.ftl_trim -- common/autotest_common.sh@903 -- # return 0 00:17:20.120 11:59:18 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:20.120 11:59:18 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:20.380 11:59:19 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:20.380 11:59:19 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:20.642 11:59:19 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:20.642 { 00:17:20.642 "name": "ftl0", 00:17:20.642 "aliases": [ 00:17:20.642 "97f51f8a-8f80-4c71-8dcb-fbdd0567b8b7" 00:17:20.642 ], 00:17:20.642 "product_name": "FTL disk", 00:17:20.642 "block_size": 4096, 00:17:20.642 "num_blocks": 23592960, 00:17:20.642 "uuid": "97f51f8a-8f80-4c71-8dcb-fbdd0567b8b7", 00:17:20.642 "assigned_rate_limits": { 00:17:20.642 "rw_ios_per_sec": 0, 00:17:20.642 "rw_mbytes_per_sec": 0, 00:17:20.642 "r_mbytes_per_sec": 0, 00:17:20.642 "w_mbytes_per_sec": 0 00:17:20.642 }, 00:17:20.642 "claimed": false, 00:17:20.642 "zoned": false, 00:17:20.642 "supported_io_types": { 
00:17:20.642 "read": true, 00:17:20.642 "write": true, 00:17:20.642 "unmap": true, 00:17:20.642 "write_zeroes": true, 00:17:20.642 "flush": true, 00:17:20.642 "reset": false, 00:17:20.642 "compare": false, 00:17:20.642 "compare_and_write": false, 00:17:20.642 "abort": false, 00:17:20.642 "nvme_admin": false, 00:17:20.642 "nvme_io": false 00:17:20.642 }, 00:17:20.642 "driver_specific": { 00:17:20.642 "ftl": { 00:17:20.642 "base_bdev": "b08e3a43-a061-409b-be38-37f0b226f30d", 00:17:20.642 "cache": "nvc0n1p0" 00:17:20.642 } 00:17:20.642 } 00:17:20.642 } 00:17:20.642 ]' 00:17:20.642 11:59:19 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:20.642 11:59:19 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:20.642 11:59:19 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:20.642 [2024-07-21 11:59:19.490871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.642 [2024-07-21 11:59:19.490925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:20.642 [2024-07-21 11:59:19.490939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:20.642 [2024-07-21 11:59:19.490967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.642 [2024-07-21 11:59:19.491048] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:20.642 [2024-07-21 11:59:19.491774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.642 [2024-07-21 11:59:19.491789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:20.642 [2024-07-21 11:59:19.491800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:17:20.642 [2024-07-21 11:59:19.491808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.642 [2024-07-21 11:59:19.492953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.642 [2024-07-21 11:59:19.492976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:20.642 [2024-07-21 11:59:19.492987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:17:20.642 [2024-07-21 11:59:19.492994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.642 [2024-07-21 11:59:19.495833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.642 [2024-07-21 11:59:19.495851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:20.642 [2024-07-21 11:59:19.495861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.757 ms 00:17:20.642 [2024-07-21 11:59:19.495869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.642 [2024-07-21 11:59:19.501446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.642 [2024-07-21 11:59:19.501475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:20.642 [2024-07-21 11:59:19.501489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.496 ms 00:17:20.642 [2024-07-21 11:59:19.501496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.642 [2024-07-21 11:59:19.503273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.642 [2024-07-21 11:59:19.503311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:20.642 [2024-07-21 11:59:19.503322] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 1.580 ms 00:17:20.642 [2024-07-21 11:59:19.503330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.903 [2024-07-21 11:59:19.508263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.903 [2024-07-21 11:59:19.508297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:20.903 [2024-07-21 11:59:19.508341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.861 ms 00:17:20.903 [2024-07-21 11:59:19.508349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.903 [2024-07-21 11:59:19.508720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.903 [2024-07-21 11:59:19.508730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:20.903 [2024-07-21 11:59:19.508740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:17:20.903 [2024-07-21 11:59:19.508747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.903 [2024-07-21 11:59:19.510720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.903 [2024-07-21 11:59:19.510751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:20.903 [2024-07-21 11:59:19.510762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.924 ms 00:17:20.903 [2024-07-21 11:59:19.510768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.903 [2024-07-21 11:59:19.512304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.903 [2024-07-21 11:59:19.512334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:20.903 [2024-07-21 11:59:19.512345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.458 ms 00:17:20.903 [2024-07-21 11:59:19.512352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.903 [2024-07-21 11:59:19.513531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.903 [2024-07-21 11:59:19.513559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:20.903 [2024-07-21 11:59:19.513571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.109 ms 00:17:20.903 [2024-07-21 11:59:19.513578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.903 [2024-07-21 11:59:19.515008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.903 [2024-07-21 11:59:19.515072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:20.903 [2024-07-21 11:59:19.515122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.224 ms 00:17:20.903 [2024-07-21 11:59:19.515142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.903 [2024-07-21 11:59:19.515253] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:20.903 [2024-07-21 11:59:19.515282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:17:20.903 [2024-07-21 11:59:19.515442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:20.903 [2024-07-21 11:59:19.515760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.515768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.515778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.515818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.515844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.515853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.515863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.515871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.515880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.515887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.515896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.515904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.515914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.515921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.515930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.515937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.515946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.515953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.515962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.515969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.515980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.515987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.515995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516200] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:20.904 [2024-07-21 11:59:19.516387] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:20.904 [2024-07-21 11:59:19.516396] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97f51f8a-8f80-4c71-8dcb-fbdd0567b8b7 00:17:20.904 [2024-07-21 11:59:19.516403] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:20.904 [2024-07-21 11:59:19.516411] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:17:20.904 [2024-07-21 11:59:19.516419] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:20.904 [2024-07-21 11:59:19.516430] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:20.904 [2024-07-21 11:59:19.516437] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:20.904 [2024-07-21 11:59:19.516445] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:20.904 [2024-07-21 11:59:19.516452] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:20.904 [2024-07-21 11:59:19.516460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:20.904 [2024-07-21 11:59:19.516466] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:20.904 [2024-07-21 11:59:19.516476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.904 [2024-07-21 11:59:19.516483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:20.904 [2024-07-21 11:59:19.516507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.227 ms 00:17:20.904 [2024-07-21 11:59:19.516515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.904 [2024-07-21 11:59:19.518614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.904 [2024-07-21 11:59:19.518665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:20.904 [2024-07-21 11:59:19.518697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.012 ms 00:17:20.904 [2024-07-21 11:59:19.518725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.904 [2024-07-21 11:59:19.518931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.904 [2024-07-21 11:59:19.518967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:20.904 [2024-07-21 11:59:19.518997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:17:20.904 [2024-07-21 11:59:19.519025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.904 [2024-07-21 11:59:19.525522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.904 [2024-07-21 11:59:19.525584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:20.904 [2024-07-21 11:59:19.525628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.904 [2024-07-21 11:59:19.525648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.904 [2024-07-21 11:59:19.525781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.904 [2024-07-21 11:59:19.525814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:20.904 [2024-07-21 11:59:19.525857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.904 [2024-07-21 11:59:19.525885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.905 [2024-07-21 11:59:19.526021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.905 [2024-07-21 11:59:19.526060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:20.905 [2024-07-21 11:59:19.526094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.905 [2024-07-21 11:59:19.526124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.905 [2024-07-21 11:59:19.526218] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.905 [2024-07-21 11:59:19.526249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:20.905 [2024-07-21 11:59:19.526279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.905 [2024-07-21 11:59:19.526307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.905 [2024-07-21 11:59:19.540190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.905 [2024-07-21 11:59:19.540333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:20.905 [2024-07-21 11:59:19.540395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.905 [2024-07-21 11:59:19.540436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.905 [2024-07-21 11:59:19.548696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.905 [2024-07-21 11:59:19.548791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:20.905 [2024-07-21 11:59:19.548851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.905 [2024-07-21 11:59:19.548892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.905 [2024-07-21 11:59:19.549011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.905 [2024-07-21 11:59:19.549045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:20.905 [2024-07-21 11:59:19.549081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.905 [2024-07-21 11:59:19.549111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.905 [2024-07-21 11:59:19.549246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.905 [2024-07-21 11:59:19.549276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:20.905 [2024-07-21 11:59:19.549324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.905 [2024-07-21 11:59:19.549353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.905 [2024-07-21 11:59:19.549517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.905 [2024-07-21 11:59:19.549555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:20.905 [2024-07-21 11:59:19.549587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.905 [2024-07-21 11:59:19.549616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.905 [2024-07-21 11:59:19.549750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.905 [2024-07-21 11:59:19.549788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:20.905 [2024-07-21 11:59:19.549829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.905 [2024-07-21 11:59:19.549876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.905 [2024-07-21 11:59:19.549983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.905 [2024-07-21 11:59:19.550015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:20.905 [2024-07-21 11:59:19.550045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.905 [2024-07-21 11:59:19.550074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:20.905 [2024-07-21 11:59:19.550196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:20.905 [2024-07-21 11:59:19.550229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:20.905 [2024-07-21 11:59:19.550259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:20.905 [2024-07-21 11:59:19.550287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.905 [2024-07-21 11:59:19.550650] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 59.873 ms, result 0 00:17:20.905 true 00:17:20.905 11:59:19 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 89631 00:17:20.905 11:59:19 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 89631 ']' 00:17:20.905 11:59:19 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 89631 00:17:20.905 11:59:19 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:17:20.905 11:59:19 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:20.905 11:59:19 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89631 00:17:20.905 11:59:19 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:20.905 11:59:19 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:20.905 11:59:19 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89631' 00:17:20.905 killing process with pid 89631 00:17:20.905 11:59:19 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 89631 00:17:20.905 11:59:19 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 89631 00:17:26.205 11:59:24 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:26.465 65536+0 records in 00:17:26.465 65536+0 records out 00:17:26.465 268435456 bytes (268 MB, 256 MiB) copied, 0.79983 s, 336 MB/s 00:17:26.465 11:59:25 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:26.465 [2024-07-21 11:59:25.253785] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
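The shell traces above capture the trim test's setup sequence end to end: wait for bdev examination to finish, poll until ftl0 appears and read its size back over RPC, unload the FTL bdev so its state is persisted (the 'FTL shutdown' sequence logged above), then generate a 256 MiB random pattern and write it into ftl0 through spdk_dd. A minimal sketch of that sequence, using the repo paths shown in the traces; the dd output path is not visible in the trace itself and is inferred here from the --if argument later passed to spdk_dd:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    pattern=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern
    $rpc bdev_wait_for_examine                         # let bdev examination settle first
    nb=$($rpc bdev_get_bdevs -b ftl0 -t 2000 | jq '.[] .num_blocks')   # 23592960 in the run above
    $rpc bdev_ftl_unload -b ftl0                       # flushes metadata; emits the shutdown trace_steps seen above
    dd if=/dev/urandom of=$pattern bs=4K count=65536   # 4 KiB x 65536 = 268435456 bytes (256 MiB)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=$pattern --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

Note the -t 2000 on bdev_get_bdevs: it makes the RPC wait up to 2000 ms for the bdev to appear, which matches the bdev_timeout=2000 default that waitforbdev fills in above when no timeout is supplied.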
00:17:26.465 [2024-07-21 11:59:25.253929] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89846 ] 00:17:26.725 [2024-07-21 11:59:25.412276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.725 [2024-07-21 11:59:25.462191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.725 [2024-07-21 11:59:25.562290] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:26.725 [2024-07-21 11:59:25.562361] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:26.987 [2024-07-21 11:59:25.709020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.987 [2024-07-21 11:59:25.709081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:26.987 [2024-07-21 11:59:25.709101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:26.987 [2024-07-21 11:59:25.709109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.987 [2024-07-21 11:59:25.711176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.987 [2024-07-21 11:59:25.711212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:26.987 [2024-07-21 11:59:25.711222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.052 ms 00:17:26.987 [2024-07-21 11:59:25.711230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.987 [2024-07-21 11:59:25.711301] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:26.987 [2024-07-21 11:59:25.711515] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:26.987 [2024-07-21 11:59:25.711533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.987 [2024-07-21 11:59:25.711540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:26.987 [2024-07-21 11:59:25.711549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:17:26.987 [2024-07-21 11:59:25.711558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.987 [2024-07-21 11:59:25.713047] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:26.987 [2024-07-21 11:59:25.715531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.987 [2024-07-21 11:59:25.715572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:26.987 [2024-07-21 11:59:25.715583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.490 ms 00:17:26.987 [2024-07-21 11:59:25.715589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.987 [2024-07-21 11:59:25.715663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.987 [2024-07-21 11:59:25.715673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:26.987 [2024-07-21 11:59:25.715682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:26.987 [2024-07-21 11:59:25.715691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.987 [2024-07-21 11:59:25.722516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.987 [2024-07-21 
11:59:25.722550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:26.987 [2024-07-21 11:59:25.722558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.784 ms 00:17:26.988 [2024-07-21 11:59:25.722588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.988 [2024-07-21 11:59:25.722690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.988 [2024-07-21 11:59:25.722701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:26.988 [2024-07-21 11:59:25.722709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:26.988 [2024-07-21 11:59:25.722726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.988 [2024-07-21 11:59:25.722758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.988 [2024-07-21 11:59:25.722768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:26.988 [2024-07-21 11:59:25.722783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:26.988 [2024-07-21 11:59:25.722796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.988 [2024-07-21 11:59:25.722818] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:26.988 [2024-07-21 11:59:25.724449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.988 [2024-07-21 11:59:25.724474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:26.988 [2024-07-21 11:59:25.724486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.641 ms 00:17:26.988 [2024-07-21 11:59:25.724493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.988 [2024-07-21 11:59:25.724554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.988 [2024-07-21 11:59:25.724563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:26.988 [2024-07-21 11:59:25.724571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:26.988 [2024-07-21 11:59:25.724578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.988 [2024-07-21 11:59:25.724595] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:26.988 [2024-07-21 11:59:25.724625] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:26.988 [2024-07-21 11:59:25.724659] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:26.988 [2024-07-21 11:59:25.724674] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:17:26.988 [2024-07-21 11:59:25.724749] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:26.988 [2024-07-21 11:59:25.724766] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:26.988 [2024-07-21 11:59:25.724775] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:26.988 [2024-07-21 11:59:25.724785] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:26.988 [2024-07-21 11:59:25.724793] ftl_layout.c: 677:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:26.988 [2024-07-21 11:59:25.724807] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:26.988 [2024-07-21 11:59:25.724814] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:26.988 [2024-07-21 11:59:25.724821] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:26.988 [2024-07-21 11:59:25.724838] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:26.988 [2024-07-21 11:59:25.724846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.988 [2024-07-21 11:59:25.724853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:26.988 [2024-07-21 11:59:25.724860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:17:26.988 [2024-07-21 11:59:25.724867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.988 [2024-07-21 11:59:25.724938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.988 [2024-07-21 11:59:25.724946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:26.988 [2024-07-21 11:59:25.724953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:26.988 [2024-07-21 11:59:25.724960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.988 [2024-07-21 11:59:25.725039] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:26.988 [2024-07-21 11:59:25.725049] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:26.988 [2024-07-21 11:59:25.725055] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.988 [2024-07-21 11:59:25.725062] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.988 [2024-07-21 11:59:25.725069] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:26.988 [2024-07-21 11:59:25.725075] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:26.988 [2024-07-21 11:59:25.725082] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:26.988 [2024-07-21 11:59:25.725087] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:26.988 [2024-07-21 11:59:25.725094] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:26.988 [2024-07-21 11:59:25.725101] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:26.988 [2024-07-21 11:59:25.725116] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:26.988 [2024-07-21 11:59:25.725122] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:26.988 [2024-07-21 11:59:25.725131] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:26.988 [2024-07-21 11:59:25.725137] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:26.988 [2024-07-21 11:59:25.725144] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:26.988 [2024-07-21 11:59:25.725149] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.988 [2024-07-21 11:59:25.725155] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:26.988 [2024-07-21 11:59:25.725161] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:26.988 [2024-07-21 11:59:25.725167] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:26.988 [2024-07-21 11:59:25.725173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:26.988 [2024-07-21 11:59:25.725179] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:26.988 [2024-07-21 11:59:25.725201] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.988 [2024-07-21 11:59:25.725207] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:26.988 [2024-07-21 11:59:25.725213] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:26.988 [2024-07-21 11:59:25.725219] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.988 [2024-07-21 11:59:25.725226] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:26.988 [2024-07-21 11:59:25.725233] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:26.988 [2024-07-21 11:59:25.725238] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.988 [2024-07-21 11:59:25.725250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:26.988 [2024-07-21 11:59:25.725257] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:26.988 [2024-07-21 11:59:25.725262] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:26.988 [2024-07-21 11:59:25.725268] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:26.988 [2024-07-21 11:59:25.725274] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:26.988 [2024-07-21 11:59:25.725280] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.988 [2024-07-21 11:59:25.725286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:26.988 [2024-07-21 11:59:25.725292] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:26.988 [2024-07-21 11:59:25.725298] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:26.988 [2024-07-21 11:59:25.725304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:26.988 [2024-07-21 11:59:25.725311] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:26.988 [2024-07-21 11:59:25.725317] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.988 [2024-07-21 11:59:25.725323] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:26.988 [2024-07-21 11:59:25.725330] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:26.988 [2024-07-21 11:59:25.725337] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.988 [2024-07-21 11:59:25.725342] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:26.988 [2024-07-21 11:59:25.725352] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:26.988 [2024-07-21 11:59:25.725366] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:26.988 [2024-07-21 11:59:25.725373] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:26.988 [2024-07-21 11:59:25.725380] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:26.988 [2024-07-21 11:59:25.725386] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:26.988 [2024-07-21 11:59:25.725393] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:26.988 [2024-07-21 11:59:25.725399] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:26.988 [2024-07-21 11:59:25.725406] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:26.988 [2024-07-21 11:59:25.725412] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:26.988 [2024-07-21 11:59:25.725419] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:26.988 [2024-07-21 11:59:25.725428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:26.988 [2024-07-21 11:59:25.725439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:26.988 [2024-07-21 11:59:25.725447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:26.988 [2024-07-21 11:59:25.725453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:26.988 [2024-07-21 11:59:25.725460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:26.988 [2024-07-21 11:59:25.725467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:26.988 [2024-07-21 11:59:25.725475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:26.988 [2024-07-21 11:59:25.725482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:26.988 [2024-07-21 11:59:25.725488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:26.988 [2024-07-21 11:59:25.725494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:26.988 [2024-07-21 11:59:25.725500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:26.988 [2024-07-21 11:59:25.725507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:26.988 [2024-07-21 11:59:25.725513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:26.988 [2024-07-21 11:59:25.725520] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:26.989 [2024-07-21 11:59:25.725526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:26.989 [2024-07-21 11:59:25.725533] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:26.989 [2024-07-21 11:59:25.725540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:26.989 [2024-07-21 11:59:25.725547] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 
blk_sz:0x20 00:17:26.989 [2024-07-21 11:59:25.725554] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:26.989 [2024-07-21 11:59:25.725560] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:26.989 [2024-07-21 11:59:25.725567] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:26.989 [2024-07-21 11:59:25.725574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.989 [2024-07-21 11:59:25.725583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:26.989 [2024-07-21 11:59:25.725590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:17:26.989 [2024-07-21 11:59:25.725597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.989 [2024-07-21 11:59:25.748310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.989 [2024-07-21 11:59:25.748361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:26.989 [2024-07-21 11:59:25.748376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.706 ms 00:17:26.989 [2024-07-21 11:59:25.748392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.989 [2024-07-21 11:59:25.748552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.989 [2024-07-21 11:59:25.748565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:26.989 [2024-07-21 11:59:25.748575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:26.989 [2024-07-21 11:59:25.748597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.989 [2024-07-21 11:59:25.758743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.989 [2024-07-21 11:59:25.758789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:26.989 [2024-07-21 11:59:25.758802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.138 ms 00:17:26.989 [2024-07-21 11:59:25.758825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.989 [2024-07-21 11:59:25.758912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.989 [2024-07-21 11:59:25.758922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:26.989 [2024-07-21 11:59:25.758930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:26.989 [2024-07-21 11:59:25.758936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.989 [2024-07-21 11:59:25.759380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.989 [2024-07-21 11:59:25.759400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:26.989 [2024-07-21 11:59:25.759408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:17:26.989 [2024-07-21 11:59:25.759418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.989 [2024-07-21 11:59:25.759528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.989 [2024-07-21 11:59:25.759541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:26.989 [2024-07-21 11:59:25.759548] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:17:26.989 [2024-07-21 11:59:25.759555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.989 [2024-07-21 11:59:25.765707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.989 [2024-07-21 11:59:25.765741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:26.989 [2024-07-21 11:59:25.765751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.142 ms 00:17:26.989 [2024-07-21 11:59:25.765758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.989 [2024-07-21 11:59:25.768473] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:26.989 [2024-07-21 11:59:25.768509] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:26.989 [2024-07-21 11:59:25.768520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.989 [2024-07-21 11:59:25.768555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:26.989 [2024-07-21 11:59:25.768563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.639 ms 00:17:26.989 [2024-07-21 11:59:25.768570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.989 [2024-07-21 11:59:25.780743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.989 [2024-07-21 11:59:25.780781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:26.989 [2024-07-21 11:59:25.780791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.151 ms 00:17:26.989 [2024-07-21 11:59:25.780819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.989 [2024-07-21 11:59:25.782706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.989 [2024-07-21 11:59:25.782737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:26.989 [2024-07-21 11:59:25.782746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.805 ms 00:17:26.989 [2024-07-21 11:59:25.782753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.989 [2024-07-21 11:59:25.784269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.989 [2024-07-21 11:59:25.784311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:26.989 [2024-07-21 11:59:25.784320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.481 ms 00:17:26.989 [2024-07-21 11:59:25.784326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.989 [2024-07-21 11:59:25.784605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.989 [2024-07-21 11:59:25.784621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:26.989 [2024-07-21 11:59:25.784630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:17:26.989 [2024-07-21 11:59:25.784637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.989 [2024-07-21 11:59:25.805057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.989 [2024-07-21 11:59:25.805124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:26.989 [2024-07-21 11:59:25.805138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.423 ms 
00:17:26.989 [2024-07-21 11:59:25.805175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.989 [2024-07-21 11:59:25.811251] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:26.989 [2024-07-21 11:59:25.827487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.989 [2024-07-21 11:59:25.827565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:26.989 [2024-07-21 11:59:25.827578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.273 ms 00:17:26.989 [2024-07-21 11:59:25.827602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.989 [2024-07-21 11:59:25.827717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.989 [2024-07-21 11:59:25.827727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:26.989 [2024-07-21 11:59:25.827738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:26.989 [2024-07-21 11:59:25.827745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.989 [2024-07-21 11:59:25.827801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.989 [2024-07-21 11:59:25.827809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:26.989 [2024-07-21 11:59:25.827816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:26.989 [2024-07-21 11:59:25.827823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.989 [2024-07-21 11:59:25.827865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.989 [2024-07-21 11:59:25.827874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:26.989 [2024-07-21 11:59:25.827882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:26.989 [2024-07-21 11:59:25.827908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.989 [2024-07-21 11:59:25.827940] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:26.989 [2024-07-21 11:59:25.827965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.989 [2024-07-21 11:59:25.827972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:26.989 [2024-07-21 11:59:25.827979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:26.989 [2024-07-21 11:59:25.827986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.989 [2024-07-21 11:59:25.831790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.989 [2024-07-21 11:59:25.831838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:26.989 [2024-07-21 11:59:25.831865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.794 ms 00:17:26.989 [2024-07-21 11:59:25.831873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.989 [2024-07-21 11:59:25.831962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:26.989 [2024-07-21 11:59:25.831973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:26.989 [2024-07-21 11:59:25.831981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:26.989 [2024-07-21 11:59:25.831989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:26.989 [2024-07-21 
11:59:25.832907] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:26.989 [2024-07-21 11:59:25.833918] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 123.797 ms, result 0 00:17:26.989 [2024-07-21 11:59:25.834668] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:26.989 [2024-07-21 11:59:25.844326] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:36.882  Copying: 25/256 [MB] (25 MBps) Copying: 51/256 [MB] (25 MBps) Copying: 77/256 [MB] (25 MBps) Copying: 103/256 [MB] (26 MBps) Copying: 129/256 [MB] (25 MBps) Copying: 155/256 [MB] (25 MBps) Copying: 181/256 [MB] (25 MBps) Copying: 206/256 [MB] (25 MBps) Copying: 233/256 [MB] (26 MBps) Copying: 256/256 [MB] (average 25 MBps)[2024-07-21 11:59:35.697327] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:36.882 [2024-07-21 11:59:35.698699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.882 [2024-07-21 11:59:35.698729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:36.882 [2024-07-21 11:59:35.698748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:36.883 [2024-07-21 11:59:35.698757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.883 [2024-07-21 11:59:35.698776] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:36.883 [2024-07-21 11:59:35.699459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.883 [2024-07-21 11:59:35.699480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:36.883 [2024-07-21 11:59:35.699489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:17:36.883 [2024-07-21 11:59:35.699496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.883 [2024-07-21 11:59:35.701456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.883 [2024-07-21 11:59:35.701493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:36.883 [2024-07-21 11:59:35.701509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.932 ms 00:17:36.883 [2024-07-21 11:59:35.701515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.883 [2024-07-21 11:59:35.707956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.883 [2024-07-21 11:59:35.707990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:36.883 [2024-07-21 11:59:35.708000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.425 ms 00:17:36.883 [2024-07-21 11:59:35.708020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.883 [2024-07-21 11:59:35.713466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.883 [2024-07-21 11:59:35.713497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:36.883 [2024-07-21 11:59:35.713507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.421 ms 00:17:36.883 [2024-07-21 11:59:35.713518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.883 [2024-07-21 11:59:35.714875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:36.883 [2024-07-21 11:59:35.714909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:36.883 [2024-07-21 11:59:35.714918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.317 ms 00:17:36.883 [2024-07-21 11:59:35.714925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.883 [2024-07-21 11:59:35.719161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.883 [2024-07-21 11:59:35.719207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:36.883 [2024-07-21 11:59:35.719216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.218 ms 00:17:36.883 [2024-07-21 11:59:35.719223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.883 [2024-07-21 11:59:35.719326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.883 [2024-07-21 11:59:35.719343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:36.883 [2024-07-21 11:59:35.719354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:36.883 [2024-07-21 11:59:35.719371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.883 [2024-07-21 11:59:35.721547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.883 [2024-07-21 11:59:35.721581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:36.883 [2024-07-21 11:59:35.721590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.158 ms 00:17:36.883 [2024-07-21 11:59:35.721596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.883 [2024-07-21 11:59:35.723050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.883 [2024-07-21 11:59:35.723082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:36.883 [2024-07-21 11:59:35.723090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.431 ms 00:17:36.883 [2024-07-21 11:59:35.723096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.883 [2024-07-21 11:59:35.724129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.883 [2024-07-21 11:59:35.724158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:36.883 [2024-07-21 11:59:35.724167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.003 ms 00:17:36.883 [2024-07-21 11:59:35.724173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.883 [2024-07-21 11:59:35.725335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.883 [2024-07-21 11:59:35.725364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:36.883 [2024-07-21 11:59:35.725372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.115 ms 00:17:36.883 [2024-07-21 11:59:35.725378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.883 [2024-07-21 11:59:35.725402] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:36.883 [2024-07-21 11:59:35.725414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725431] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725601] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 
[2024-07-21 11:59:35.725771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:36.883 [2024-07-21 11:59:35.725799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.725806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.725813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.725832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.725839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.725846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.725853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.725861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.725869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.725876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.725883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.725890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.725897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.725904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.725910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.725918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.725924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.725931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.725938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.725949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.725957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:17:36.884 [2024-07-21 11:59:35.725964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.725971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.725978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.725984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.725991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.725997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.726004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.726011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.726018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.726024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.726032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.726038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.726045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.726052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.726059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.726065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.726072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.726078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.726085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.726092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.726099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.726105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.726112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:36.884 [2024-07-21 11:59:35.726125] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:36.884 [2024-07-21 11:59:35.726132] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97f51f8a-8f80-4c71-8dcb-fbdd0567b8b7 
00:17:36.884 [2024-07-21 11:59:35.726139] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:36.884 [2024-07-21 11:59:35.726158] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:36.884 [2024-07-21 11:59:35.726164] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:36.884 [2024-07-21 11:59:35.726178] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:36.884 [2024-07-21 11:59:35.726185] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:36.884 [2024-07-21 11:59:35.726196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:36.884 [2024-07-21 11:59:35.726202] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:36.884 [2024-07-21 11:59:35.726208] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:36.884 [2024-07-21 11:59:35.726222] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:36.884 [2024-07-21 11:59:35.726229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.884 [2024-07-21 11:59:35.726236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:36.884 [2024-07-21 11:59:35.726243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.829 ms 00:17:36.884 [2024-07-21 11:59:35.726254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.884 [2024-07-21 11:59:35.727993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.884 [2024-07-21 11:59:35.728011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:36.884 [2024-07-21 11:59:35.728019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.726 ms 00:17:36.884 [2024-07-21 11:59:35.728030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.884 [2024-07-21 11:59:35.728132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.884 [2024-07-21 11:59:35.728139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:36.884 [2024-07-21 11:59:35.728147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:36.884 [2024-07-21 11:59:35.728153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.884 [2024-07-21 11:59:35.733748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.884 [2024-07-21 11:59:35.733801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:36.884 [2024-07-21 11:59:35.733845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.884 [2024-07-21 11:59:35.733864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.884 [2024-07-21 11:59:35.733938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.884 [2024-07-21 11:59:35.733961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:36.884 [2024-07-21 11:59:35.733985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.884 [2024-07-21 11:59:35.734041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.884 [2024-07-21 11:59:35.734109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.884 [2024-07-21 11:59:35.734164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:36.884 [2024-07-21 11:59:35.734192] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.884 [2024-07-21 11:59:35.734211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.884 [2024-07-21 11:59:35.734243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:36.884 [2024-07-21 11:59:35.734282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:36.884 [2024-07-21 11:59:35.734317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:36.884 [2024-07-21 11:59:35.734360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.144 [2024-07-21 11:59:35.746773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.144 [2024-07-21 11:59:35.746909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:37.144 [2024-07-21 11:59:35.746944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.144 [2024-07-21 11:59:35.747000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.144 [2024-07-21 11:59:35.755044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.144 [2024-07-21 11:59:35.755156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:37.144 [2024-07-21 11:59:35.755188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.144 [2024-07-21 11:59:35.755208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.144 [2024-07-21 11:59:35.755249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.144 [2024-07-21 11:59:35.755268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:37.144 [2024-07-21 11:59:35.755288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.144 [2024-07-21 11:59:35.755306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.144 [2024-07-21 11:59:35.755359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.144 [2024-07-21 11:59:35.755381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:37.144 [2024-07-21 11:59:35.755465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.144 [2024-07-21 11:59:35.755512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.144 [2024-07-21 11:59:35.755600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.144 [2024-07-21 11:59:35.755636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:37.144 [2024-07-21 11:59:35.755667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.144 [2024-07-21 11:59:35.755686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.144 [2024-07-21 11:59:35.755755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.144 [2024-07-21 11:59:35.755801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:37.144 [2024-07-21 11:59:35.755843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.144 [2024-07-21 11:59:35.755880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.144 [2024-07-21 11:59:35.755946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.144 [2024-07-21 11:59:35.755956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:17:37.144 [2024-07-21 11:59:35.755964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.144 [2024-07-21 11:59:35.755971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.144 [2024-07-21 11:59:35.756028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:37.144 [2024-07-21 11:59:35.756037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:37.144 [2024-07-21 11:59:35.756045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:37.144 [2024-07-21 11:59:35.756052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.144 [2024-07-21 11:59:35.756179] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.566 ms, result 0 00:17:37.403 00:17:37.403 00:17:37.662 11:59:36 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:37.662 11:59:36 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=89961 00:17:37.662 11:59:36 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 89961 00:17:37.662 11:59:36 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 89961 ']' 00:17:37.662 11:59:36 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.662 11:59:36 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:37.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.662 11:59:36 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.662 11:59:36 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:37.662 11:59:36 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:37.662 [2024-07-21 11:59:36.362881] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:17:37.662 [2024-07-21 11:59:36.362985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89961 ] 00:17:37.662 [2024-07-21 11:59:36.520376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.922 [2024-07-21 11:59:36.563477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.491 11:59:37 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:38.491 11:59:37 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:17:38.491 11:59:37 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:38.491 [2024-07-21 11:59:37.342511] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:38.491 [2024-07-21 11:59:37.342572] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:38.751 [2024-07-21 11:59:37.505134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.751 [2024-07-21 11:59:37.505182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:38.751 [2024-07-21 11:59:37.505196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:38.751 [2024-07-21 11:59:37.505219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.751 [2024-07-21 11:59:37.507181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.751 [2024-07-21 11:59:37.507222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:38.751 [2024-07-21 11:59:37.507236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.947 ms 00:17:38.751 [2024-07-21 11:59:37.507243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.751 [2024-07-21 11:59:37.507332] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:38.751 [2024-07-21 11:59:37.507532] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:38.751 [2024-07-21 11:59:37.507546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.751 [2024-07-21 11:59:37.507554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:38.751 [2024-07-21 11:59:37.507563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:17:38.751 [2024-07-21 11:59:37.507570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.751 [2024-07-21 11:59:37.509040] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:38.751 [2024-07-21 11:59:37.511410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.751 [2024-07-21 11:59:37.511447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:38.752 [2024-07-21 11:59:37.511457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.378 ms 00:17:38.752 [2024-07-21 11:59:37.511466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.752 [2024-07-21 11:59:37.511529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.752 [2024-07-21 11:59:37.511541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:38.752 [2024-07-21 11:59:37.511549] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:38.752 [2024-07-21 11:59:37.511561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.752 [2024-07-21 11:59:37.518127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.752 [2024-07-21 11:59:37.518170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:38.752 [2024-07-21 11:59:37.518179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.537 ms 00:17:38.752 [2024-07-21 11:59:37.518204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.752 [2024-07-21 11:59:37.518291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.752 [2024-07-21 11:59:37.518304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:38.752 [2024-07-21 11:59:37.518313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:38.752 [2024-07-21 11:59:37.518324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.752 [2024-07-21 11:59:37.518353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.752 [2024-07-21 11:59:37.518362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:38.752 [2024-07-21 11:59:37.518370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:38.752 [2024-07-21 11:59:37.518377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.752 [2024-07-21 11:59:37.518401] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:38.752 [2024-07-21 11:59:37.519998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.752 [2024-07-21 11:59:37.520031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:38.752 [2024-07-21 11:59:37.520043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.603 ms 00:17:38.752 [2024-07-21 11:59:37.520062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.752 [2024-07-21 11:59:37.520107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.752 [2024-07-21 11:59:37.520115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:38.752 [2024-07-21 11:59:37.520124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:38.752 [2024-07-21 11:59:37.520131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.752 [2024-07-21 11:59:37.520159] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:38.752 [2024-07-21 11:59:37.520178] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:38.752 [2024-07-21 11:59:37.520219] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:38.752 [2024-07-21 11:59:37.520235] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:17:38.752 [2024-07-21 11:59:37.520325] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:38.752 [2024-07-21 11:59:37.520338] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:38.752 [2024-07-21 11:59:37.520348] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:38.752 [2024-07-21 11:59:37.520357] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:38.752 [2024-07-21 11:59:37.520366] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:38.752 [2024-07-21 11:59:37.520373] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:38.752 [2024-07-21 11:59:37.520383] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:38.752 [2024-07-21 11:59:37.520390] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:38.752 [2024-07-21 11:59:37.520398] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:38.752 [2024-07-21 11:59:37.520407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.752 [2024-07-21 11:59:37.520416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:38.752 [2024-07-21 11:59:37.520423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:17:38.752 [2024-07-21 11:59:37.520431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.752 [2024-07-21 11:59:37.520499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.752 [2024-07-21 11:59:37.520509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:38.752 [2024-07-21 11:59:37.520516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:38.752 [2024-07-21 11:59:37.520523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.752 [2024-07-21 11:59:37.520600] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:38.752 [2024-07-21 11:59:37.520613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:38.752 [2024-07-21 11:59:37.520620] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:38.752 [2024-07-21 11:59:37.520629] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:38.752 [2024-07-21 11:59:37.520636] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:38.752 [2024-07-21 11:59:37.520645] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:38.752 [2024-07-21 11:59:37.520651] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:38.752 [2024-07-21 11:59:37.520659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:38.752 [2024-07-21 11:59:37.520666] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:38.752 [2024-07-21 11:59:37.520674] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:38.752 [2024-07-21 11:59:37.520680] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:38.752 [2024-07-21 11:59:37.520687] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:38.752 [2024-07-21 11:59:37.520694] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:38.752 [2024-07-21 11:59:37.520701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:38.752 [2024-07-21 11:59:37.520708] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:38.752 [2024-07-21 11:59:37.520715] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:38.752 
[2024-07-21 11:59:37.520721] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:38.752 [2024-07-21 11:59:37.520728] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:38.752 [2024-07-21 11:59:37.520734] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:38.752 [2024-07-21 11:59:37.520751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:38.752 [2024-07-21 11:59:37.520758] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:38.752 [2024-07-21 11:59:37.520767] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:38.752 [2024-07-21 11:59:37.520773] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:38.752 [2024-07-21 11:59:37.520781] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:38.752 [2024-07-21 11:59:37.520787] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:38.752 [2024-07-21 11:59:37.520795] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:38.752 [2024-07-21 11:59:37.520801] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:38.752 [2024-07-21 11:59:37.520808] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:38.752 [2024-07-21 11:59:37.520814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:38.752 [2024-07-21 11:59:37.520821] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:38.752 [2024-07-21 11:59:37.520857] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:38.752 [2024-07-21 11:59:37.520866] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:38.752 [2024-07-21 11:59:37.520872] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:38.752 [2024-07-21 11:59:37.520879] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:38.752 [2024-07-21 11:59:37.520886] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:38.752 [2024-07-21 11:59:37.520894] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:38.752 [2024-07-21 11:59:37.520900] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:38.752 [2024-07-21 11:59:37.520908] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:38.752 [2024-07-21 11:59:37.520914] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:38.752 [2024-07-21 11:59:37.520922] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:38.752 [2024-07-21 11:59:37.520944] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:38.752 [2024-07-21 11:59:37.520952] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:38.752 [2024-07-21 11:59:37.520959] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:38.752 [2024-07-21 11:59:37.520967] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:38.752 [2024-07-21 11:59:37.520975] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:38.752 [2024-07-21 11:59:37.520983] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:38.752 [2024-07-21 11:59:37.520989] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:38.752 [2024-07-21 11:59:37.521004] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:38.752 [2024-07-21 11:59:37.521011] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:38.752 [2024-07-21 11:59:37.521019] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:38.752 [2024-07-21 11:59:37.521026] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:38.752 [2024-07-21 11:59:37.521034] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:38.752 [2024-07-21 11:59:37.521041] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:38.752 [2024-07-21 11:59:37.521080] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:38.752 [2024-07-21 11:59:37.521089] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:38.752 [2024-07-21 11:59:37.521109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:38.752 [2024-07-21 11:59:37.521116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:38.752 [2024-07-21 11:59:37.521124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:38.752 [2024-07-21 11:59:37.521131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:38.752 [2024-07-21 11:59:37.521138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:38.752 [2024-07-21 11:59:37.521145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:38.753 [2024-07-21 11:59:37.521153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:38.753 [2024-07-21 11:59:37.521160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:38.753 [2024-07-21 11:59:37.521168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:38.753 [2024-07-21 11:59:37.521175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:38.753 [2024-07-21 11:59:37.521183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:38.753 [2024-07-21 11:59:37.521189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:38.753 [2024-07-21 11:59:37.521197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:38.753 [2024-07-21 11:59:37.521203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:38.753 [2024-07-21 11:59:37.521214] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:38.753 [2024-07-21 
11:59:37.521225] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:38.753 [2024-07-21 11:59:37.521233] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:38.753 [2024-07-21 11:59:37.521240] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:38.753 [2024-07-21 11:59:37.521248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:38.753 [2024-07-21 11:59:37.521254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:38.753 [2024-07-21 11:59:37.521263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.753 [2024-07-21 11:59:37.521271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:38.753 [2024-07-21 11:59:37.521286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:17:38.753 [2024-07-21 11:59:37.521293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.753 [2024-07-21 11:59:37.532857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.753 [2024-07-21 11:59:37.532909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:38.753 [2024-07-21 11:59:37.532923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.527 ms 00:17:38.753 [2024-07-21 11:59:37.532936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.753 [2024-07-21 11:59:37.533050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.753 [2024-07-21 11:59:37.533062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:38.753 [2024-07-21 11:59:37.533073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:38.753 [2024-07-21 11:59:37.533080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.753 [2024-07-21 11:59:37.542788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.753 [2024-07-21 11:59:37.542828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:38.753 [2024-07-21 11:59:37.542857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.706 ms 00:17:38.753 [2024-07-21 11:59:37.542864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.753 [2024-07-21 11:59:37.542933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.753 [2024-07-21 11:59:37.542942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:38.753 [2024-07-21 11:59:37.542951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:38.753 [2024-07-21 11:59:37.542958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.753 [2024-07-21 11:59:37.543404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.753 [2024-07-21 11:59:37.543422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:38.753 [2024-07-21 11:59:37.543432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:17:38.753 [2024-07-21 11:59:37.543439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:38.753 [2024-07-21 11:59:37.543542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.753 [2024-07-21 11:59:37.543555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:38.753 [2024-07-21 11:59:37.543566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:17:38.753 [2024-07-21 11:59:37.543574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.753 [2024-07-21 11:59:37.550201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.753 [2024-07-21 11:59:37.550231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:38.753 [2024-07-21 11:59:37.550242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.617 ms 00:17:38.753 [2024-07-21 11:59:37.550265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.753 [2024-07-21 11:59:37.552779] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:38.753 [2024-07-21 11:59:37.552815] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:38.753 [2024-07-21 11:59:37.552865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.753 [2024-07-21 11:59:37.552873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:38.753 [2024-07-21 11:59:37.552883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.511 ms 00:17:38.753 [2024-07-21 11:59:37.552890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.753 [2024-07-21 11:59:37.564945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.753 [2024-07-21 11:59:37.564983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:38.753 [2024-07-21 11:59:37.564996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.024 ms 00:17:38.753 [2024-07-21 11:59:37.565019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.753 [2024-07-21 11:59:37.566770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.753 [2024-07-21 11:59:37.566803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:38.753 [2024-07-21 11:59:37.566815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.696 ms 00:17:38.753 [2024-07-21 11:59:37.566835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.753 [2024-07-21 11:59:37.568357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.753 [2024-07-21 11:59:37.568388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:38.753 [2024-07-21 11:59:37.568399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.482 ms 00:17:38.753 [2024-07-21 11:59:37.568406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.753 [2024-07-21 11:59:37.568670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.753 [2024-07-21 11:59:37.568686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:38.753 [2024-07-21 11:59:37.568696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.200 ms 00:17:38.753 [2024-07-21 11:59:37.568713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.753 [2024-07-21 11:59:37.596650] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.753 [2024-07-21 11:59:37.596714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:38.753 [2024-07-21 11:59:37.596746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.958 ms 00:17:38.753 [2024-07-21 11:59:37.596754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.753 [2024-07-21 11:59:37.602909] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:39.013 [2024-07-21 11:59:37.618565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.013 [2024-07-21 11:59:37.618634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:39.013 [2024-07-21 11:59:37.618646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.750 ms 00:17:39.013 [2024-07-21 11:59:37.618655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.013 [2024-07-21 11:59:37.618748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.013 [2024-07-21 11:59:37.618760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:39.013 [2024-07-21 11:59:37.618771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:39.013 [2024-07-21 11:59:37.618780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.013 [2024-07-21 11:59:37.618851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.013 [2024-07-21 11:59:37.618863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:39.013 [2024-07-21 11:59:37.618871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:39.013 [2024-07-21 11:59:37.618879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.013 [2024-07-21 11:59:37.618901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.013 [2024-07-21 11:59:37.618910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:39.013 [2024-07-21 11:59:37.618917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:39.013 [2024-07-21 11:59:37.618932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.013 [2024-07-21 11:59:37.618963] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:39.013 [2024-07-21 11:59:37.618974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.013 [2024-07-21 11:59:37.618980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:39.013 [2024-07-21 11:59:37.618989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:39.013 [2024-07-21 11:59:37.618996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.013 [2024-07-21 11:59:37.622699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.013 [2024-07-21 11:59:37.622733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:39.013 [2024-07-21 11:59:37.622764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.685 ms 00:17:39.013 [2024-07-21 11:59:37.622781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.013 [2024-07-21 11:59:37.622884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.013 [2024-07-21 11:59:37.622895] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:39.013 [2024-07-21 11:59:37.622908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:39.013 [2024-07-21 11:59:37.622916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.013 [2024-07-21 11:59:37.623844] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:39.013 [2024-07-21 11:59:37.624780] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 118.638 ms, result 0 00:17:39.013 [2024-07-21 11:59:37.625756] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:39.013 Some configs were skipped because the RPC state that can call them passed over. 00:17:39.013 11:59:37 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:39.013 [2024-07-21 11:59:37.832234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.013 [2024-07-21 11:59:37.832398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:39.013 [2024-07-21 11:59:37.832455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.457 ms 00:17:39.013 [2024-07-21 11:59:37.832483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.013 [2024-07-21 11:59:37.832543] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.771 ms, result 0 00:17:39.013 true 00:17:39.013 11:59:37 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:39.272 [2024-07-21 11:59:38.015869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.272 [2024-07-21 11:59:38.016001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:39.272 [2024-07-21 11:59:38.016036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.235 ms 00:17:39.272 [2024-07-21 11:59:38.016057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.272 [2024-07-21 11:59:38.016118] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.486 ms, result 0 00:17:39.272 true 00:17:39.272 11:59:38 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 89961 00:17:39.272 11:59:38 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 89961 ']' 00:17:39.272 11:59:38 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 89961 00:17:39.272 11:59:38 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:17:39.272 11:59:38 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:39.272 11:59:38 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 89961 00:17:39.272 killing process with pid 89961 00:17:39.272 11:59:38 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:39.272 11:59:38 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:39.272 11:59:38 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 89961' 00:17:39.272 11:59:38 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 89961 00:17:39.272 11:59:38 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 89961 00:17:39.532 [2024-07-21 11:59:38.203270] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.532 [2024-07-21 11:59:38.203336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:39.532 [2024-07-21 11:59:38.203349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:39.532 [2024-07-21 11:59:38.203358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.532 [2024-07-21 11:59:38.203382] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:39.532 [2024-07-21 11:59:38.204035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.532 [2024-07-21 11:59:38.204044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:39.532 [2024-07-21 11:59:38.204053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.639 ms 00:17:39.532 [2024-07-21 11:59:38.204062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.532 [2024-07-21 11:59:38.204296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.532 [2024-07-21 11:59:38.204305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:39.532 [2024-07-21 11:59:38.204314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:17:39.532 [2024-07-21 11:59:38.204320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.532 [2024-07-21 11:59:38.207591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.532 [2024-07-21 11:59:38.207627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:39.532 [2024-07-21 11:59:38.207652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.255 ms 00:17:39.532 [2024-07-21 11:59:38.207660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.532 [2024-07-21 11:59:38.213174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.532 [2024-07-21 11:59:38.213208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:39.532 [2024-07-21 11:59:38.213219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.472 ms 00:17:39.532 [2024-07-21 11:59:38.213226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.532 [2024-07-21 11:59:38.214714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.532 [2024-07-21 11:59:38.214749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:39.532 [2024-07-21 11:59:38.214760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.411 ms 00:17:39.532 [2024-07-21 11:59:38.214767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.532 [2024-07-21 11:59:38.218713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.532 [2024-07-21 11:59:38.218746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:39.532 [2024-07-21 11:59:38.218757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.921 ms 00:17:39.533 [2024-07-21 11:59:38.218764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.533 [2024-07-21 11:59:38.218875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.533 [2024-07-21 11:59:38.218885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:39.533 [2024-07-21 11:59:38.218894] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:17:39.533 [2024-07-21 11:59:38.218901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.533 [2024-07-21 11:59:38.220955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.533 [2024-07-21 11:59:38.220988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:39.533 [2024-07-21 11:59:38.220999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.036 ms 00:17:39.533 [2024-07-21 11:59:38.221006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.533 [2024-07-21 11:59:38.222511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.533 [2024-07-21 11:59:38.222541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:39.533 [2024-07-21 11:59:38.222551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.471 ms 00:17:39.533 [2024-07-21 11:59:38.222558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.533 [2024-07-21 11:59:38.223730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.533 [2024-07-21 11:59:38.223760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:39.533 [2024-07-21 11:59:38.223770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.144 ms 00:17:39.533 [2024-07-21 11:59:38.223777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.533 [2024-07-21 11:59:38.224972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.533 [2024-07-21 11:59:38.225001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:39.533 [2024-07-21 11:59:38.225012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.131 ms 00:17:39.533 [2024-07-21 11:59:38.225018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.533 [2024-07-21 11:59:38.225047] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:39.533 [2024-07-21 11:59:38.225061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225145] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 
11:59:38.225351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:17:39.533 [2024-07-21 11:59:38.225549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:39.533 [2024-07-21 11:59:38.225687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:39.534 [2024-07-21 11:59:38.225694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:39.534 [2024-07-21 11:59:38.225703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:39.534 [2024-07-21 11:59:38.225710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:39.534 [2024-07-21 11:59:38.225718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:39.534 [2024-07-21 11:59:38.225725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:39.534 [2024-07-21 11:59:38.225736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:39.534 [2024-07-21 11:59:38.225743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:17:39.534 [2024-07-21 11:59:38.225752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:39.534 [2024-07-21 11:59:38.225760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:39.534 [2024-07-21 11:59:38.225779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:39.534 [2024-07-21 11:59:38.225786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:39.534 [2024-07-21 11:59:38.225794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:39.534 [2024-07-21 11:59:38.225801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:39.534 [2024-07-21 11:59:38.225810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:39.534 [2024-07-21 11:59:38.225817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:39.534 [2024-07-21 11:59:38.225853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:39.534 [2024-07-21 11:59:38.225860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:39.534 [2024-07-21 11:59:38.225869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:39.534 [2024-07-21 11:59:38.225876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:39.534 [2024-07-21 11:59:38.225884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:39.534 [2024-07-21 11:59:38.225891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:39.534 [2024-07-21 11:59:38.225901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:39.534 [2024-07-21 11:59:38.225914] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:39.534 [2024-07-21 11:59:38.225923] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97f51f8a-8f80-4c71-8dcb-fbdd0567b8b7 00:17:39.534 [2024-07-21 11:59:38.225930] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:39.534 [2024-07-21 11:59:38.225938] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:39.534 [2024-07-21 11:59:38.225947] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:39.534 [2024-07-21 11:59:38.225956] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:39.534 [2024-07-21 11:59:38.225962] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:39.534 [2024-07-21 11:59:38.225970] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:39.534 [2024-07-21 11:59:38.225977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:39.534 [2024-07-21 11:59:38.225985] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:39.534 [2024-07-21 11:59:38.225991] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:39.534 [2024-07-21 11:59:38.226000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.534 
[2024-07-21 11:59:38.226008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:39.534 [2024-07-21 11:59:38.226017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:17:39.534 [2024-07-21 11:59:38.226023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.534 [2024-07-21 11:59:38.227799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.534 [2024-07-21 11:59:38.227820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:39.534 [2024-07-21 11:59:38.227830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.733 ms 00:17:39.534 [2024-07-21 11:59:38.227837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.534 [2024-07-21 11:59:38.227967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.534 [2024-07-21 11:59:38.227978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:39.534 [2024-07-21 11:59:38.227987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:17:39.534 [2024-07-21 11:59:38.227994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.534 [2024-07-21 11:59:38.234236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.534 [2024-07-21 11:59:38.234259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:39.534 [2024-07-21 11:59:38.234271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.534 [2024-07-21 11:59:38.234280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.534 [2024-07-21 11:59:38.234356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.534 [2024-07-21 11:59:38.234367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:39.534 [2024-07-21 11:59:38.234375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.534 [2024-07-21 11:59:38.234382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.534 [2024-07-21 11:59:38.234434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.534 [2024-07-21 11:59:38.234445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:39.534 [2024-07-21 11:59:38.234454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.534 [2024-07-21 11:59:38.234461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.534 [2024-07-21 11:59:38.234479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.534 [2024-07-21 11:59:38.234487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:39.534 [2024-07-21 11:59:38.234495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.534 [2024-07-21 11:59:38.234502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.534 [2024-07-21 11:59:38.248187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.534 [2024-07-21 11:59:38.248316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:39.534 [2024-07-21 11:59:38.248347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.534 [2024-07-21 11:59:38.248366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.534 [2024-07-21 11:59:38.256378] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.534 [2024-07-21 11:59:38.256473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:39.534 [2024-07-21 11:59:38.256504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.534 [2024-07-21 11:59:38.256524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.534 [2024-07-21 11:59:38.256589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.534 [2024-07-21 11:59:38.256611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:39.534 [2024-07-21 11:59:38.256635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.534 [2024-07-21 11:59:38.256653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.534 [2024-07-21 11:59:38.256695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.534 [2024-07-21 11:59:38.256733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:39.534 [2024-07-21 11:59:38.256753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.534 [2024-07-21 11:59:38.256771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.534 [2024-07-21 11:59:38.256881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.534 [2024-07-21 11:59:38.256920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:39.534 [2024-07-21 11:59:38.256951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.534 [2024-07-21 11:59:38.256973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.534 [2024-07-21 11:59:38.257039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.534 [2024-07-21 11:59:38.257084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:39.534 [2024-07-21 11:59:38.257120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.534 [2024-07-21 11:59:38.257139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.534 [2024-07-21 11:59:38.257216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.534 [2024-07-21 11:59:38.257245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:39.534 [2024-07-21 11:59:38.257288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.534 [2024-07-21 11:59:38.257310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.534 [2024-07-21 11:59:38.257369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:39.534 [2024-07-21 11:59:38.257400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:39.534 [2024-07-21 11:59:38.257430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:39.534 [2024-07-21 11:59:38.257449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.534 [2024-07-21 11:59:38.257596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.408 ms, result 0 00:17:39.793 11:59:38 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:39.793 11:59:38 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:39.793 [2024-07-21 11:59:38.583536] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:39.793 [2024-07-21 11:59:38.583721] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90001 ] 00:17:40.051 [2024-07-21 11:59:38.743120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.051 [2024-07-21 11:59:38.786514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.051 [2024-07-21 11:59:38.886007] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:40.051 [2024-07-21 11:59:38.886169] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:40.314 [2024-07-21 11:59:39.032483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.314 [2024-07-21 11:59:39.032614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:40.314 [2024-07-21 11:59:39.032647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:17:40.314 [2024-07-21 11:59:39.032666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.314 [2024-07-21 11:59:39.034672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.314 [2024-07-21 11:59:39.034747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:40.314 [2024-07-21 11:59:39.034773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.979 ms 00:17:40.314 [2024-07-21 11:59:39.034800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.314 [2024-07-21 11:59:39.034887] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:40.314 [2024-07-21 11:59:39.035188] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:40.314 [2024-07-21 11:59:39.035248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.314 [2024-07-21 11:59:39.035277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:40.314 [2024-07-21 11:59:39.035298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.368 ms 00:17:40.314 [2024-07-21 11:59:39.035309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.314 [2024-07-21 11:59:39.036777] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:40.314 [2024-07-21 11:59:39.039211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.314 [2024-07-21 11:59:39.039278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:40.314 [2024-07-21 11:59:39.039305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.440 ms 00:17:40.314 [2024-07-21 11:59:39.039337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.314 [2024-07-21 11:59:39.039408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.314 [2024-07-21 11:59:39.039433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:40.314 [2024-07-21 11:59:39.039478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 
ms 00:17:40.314 [2024-07-21 11:59:39.039500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.314 [2024-07-21 11:59:39.046014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.314 [2024-07-21 11:59:39.046075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:40.314 [2024-07-21 11:59:39.046099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.456 ms 00:17:40.314 [2024-07-21 11:59:39.046116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.314 [2024-07-21 11:59:39.046242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.314 [2024-07-21 11:59:39.046271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:40.314 [2024-07-21 11:59:39.046290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:40.314 [2024-07-21 11:59:39.046351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.314 [2024-07-21 11:59:39.046397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.314 [2024-07-21 11:59:39.046422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:40.314 [2024-07-21 11:59:39.046493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:40.314 [2024-07-21 11:59:39.046517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.314 [2024-07-21 11:59:39.046552] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:40.314 [2024-07-21 11:59:39.048174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.314 [2024-07-21 11:59:39.048230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:40.314 [2024-07-21 11:59:39.048263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.632 ms 00:17:40.314 [2024-07-21 11:59:39.048283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.314 [2024-07-21 11:59:39.048332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.314 [2024-07-21 11:59:39.048341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:40.314 [2024-07-21 11:59:39.048349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:40.314 [2024-07-21 11:59:39.048355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.314 [2024-07-21 11:59:39.048373] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:40.314 [2024-07-21 11:59:39.048391] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:40.314 [2024-07-21 11:59:39.048425] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:40.314 [2024-07-21 11:59:39.048441] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:17:40.314 [2024-07-21 11:59:39.048518] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:40.314 [2024-07-21 11:59:39.048528] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:40.314 [2024-07-21 11:59:39.048537] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob 
store 0x168 bytes 00:17:40.314 [2024-07-21 11:59:39.048557] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:40.314 [2024-07-21 11:59:39.048565] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:40.314 [2024-07-21 11:59:39.048572] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:40.314 [2024-07-21 11:59:39.048580] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:40.314 [2024-07-21 11:59:39.048586] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:40.314 [2024-07-21 11:59:39.048595] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:40.314 [2024-07-21 11:59:39.048603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.314 [2024-07-21 11:59:39.048610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:40.314 [2024-07-21 11:59:39.048617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.233 ms 00:17:40.314 [2024-07-21 11:59:39.048624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.314 [2024-07-21 11:59:39.048701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.314 [2024-07-21 11:59:39.048709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:40.314 [2024-07-21 11:59:39.048723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:40.314 [2024-07-21 11:59:39.048730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.314 [2024-07-21 11:59:39.048843] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:40.314 [2024-07-21 11:59:39.048853] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:40.314 [2024-07-21 11:59:39.048862] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:40.314 [2024-07-21 11:59:39.048869] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.314 [2024-07-21 11:59:39.048877] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:40.314 [2024-07-21 11:59:39.048883] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:40.314 [2024-07-21 11:59:39.048889] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:40.314 [2024-07-21 11:59:39.048895] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:40.314 [2024-07-21 11:59:39.048902] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:40.314 [2024-07-21 11:59:39.048908] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:40.314 [2024-07-21 11:59:39.048924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:40.314 [2024-07-21 11:59:39.048932] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:40.314 [2024-07-21 11:59:39.048942] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:40.314 [2024-07-21 11:59:39.048948] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:40.314 [2024-07-21 11:59:39.048955] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:40.314 [2024-07-21 11:59:39.048961] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.314 [2024-07-21 11:59:39.048967] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] Region nvc_md_mirror 00:17:40.314 [2024-07-21 11:59:39.048973] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:40.314 [2024-07-21 11:59:39.048979] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.314 [2024-07-21 11:59:39.048985] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:40.314 [2024-07-21 11:59:39.048991] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:40.314 [2024-07-21 11:59:39.048997] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.314 [2024-07-21 11:59:39.049002] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:40.314 [2024-07-21 11:59:39.049008] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:40.314 [2024-07-21 11:59:39.049014] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.314 [2024-07-21 11:59:39.049019] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:40.314 [2024-07-21 11:59:39.049025] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:40.314 [2024-07-21 11:59:39.049031] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.314 [2024-07-21 11:59:39.049057] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:40.314 [2024-07-21 11:59:39.049063] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:40.314 [2024-07-21 11:59:39.049069] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.314 [2024-07-21 11:59:39.049074] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:40.314 [2024-07-21 11:59:39.049081] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:40.314 [2024-07-21 11:59:39.049086] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:40.314 [2024-07-21 11:59:39.049092] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:40.315 [2024-07-21 11:59:39.049098] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:40.315 [2024-07-21 11:59:39.049104] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:40.315 [2024-07-21 11:59:39.049110] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:40.315 [2024-07-21 11:59:39.049115] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:40.315 [2024-07-21 11:59:39.049122] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.315 [2024-07-21 11:59:39.049128] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:40.315 [2024-07-21 11:59:39.049134] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:40.315 [2024-07-21 11:59:39.049140] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.315 [2024-07-21 11:59:39.049146] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:40.315 [2024-07-21 11:59:39.049155] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:40.315 [2024-07-21 11:59:39.049168] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:40.315 [2024-07-21 11:59:39.049182] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.315 [2024-07-21 11:59:39.049188] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:40.315 [2024-07-21 11:59:39.049195] 
ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:40.315 [2024-07-21 11:59:39.049201] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:40.315 [2024-07-21 11:59:39.049208] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:40.315 [2024-07-21 11:59:39.049214] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:40.315 [2024-07-21 11:59:39.049220] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:40.315 [2024-07-21 11:59:39.049227] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:40.315 [2024-07-21 11:59:39.049235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:40.315 [2024-07-21 11:59:39.049246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:40.315 [2024-07-21 11:59:39.049254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:40.315 [2024-07-21 11:59:39.049260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:40.315 [2024-07-21 11:59:39.049267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:40.315 [2024-07-21 11:59:39.049273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:40.315 [2024-07-21 11:59:39.049282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:40.315 [2024-07-21 11:59:39.049289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:40.315 [2024-07-21 11:59:39.049295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:40.315 [2024-07-21 11:59:39.049302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:40.315 [2024-07-21 11:59:39.049308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:40.315 [2024-07-21 11:59:39.049314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:40.315 [2024-07-21 11:59:39.049321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:40.315 [2024-07-21 11:59:39.049328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:40.315 [2024-07-21 11:59:39.049335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:40.315 [2024-07-21 11:59:39.049341] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:40.315 [2024-07-21 11:59:39.049355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:40.315 [2024-07-21 11:59:39.049364] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:40.315 [2024-07-21 11:59:39.049371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:40.315 [2024-07-21 11:59:39.049378] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:40.315 [2024-07-21 11:59:39.049384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:40.315 [2024-07-21 11:59:39.049392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.315 [2024-07-21 11:59:39.049401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:40.315 [2024-07-21 11:59:39.049408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:17:40.315 [2024-07-21 11:59:39.049415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.315 [2024-07-21 11:59:39.067379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.315 [2024-07-21 11:59:39.067422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:40.315 [2024-07-21 11:59:39.067436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.948 ms 00:17:40.315 [2024-07-21 11:59:39.067449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.315 [2024-07-21 11:59:39.067589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.315 [2024-07-21 11:59:39.067601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:40.315 [2024-07-21 11:59:39.067611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:40.315 [2024-07-21 11:59:39.067620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.315 [2024-07-21 11:59:39.077602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.315 [2024-07-21 11:59:39.077636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:40.315 [2024-07-21 11:59:39.077647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.977 ms 00:17:40.315 [2024-07-21 11:59:39.077656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.315 [2024-07-21 11:59:39.077715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.315 [2024-07-21 11:59:39.077724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:40.315 [2024-07-21 11:59:39.077732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:40.315 [2024-07-21 11:59:39.077738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.315 [2024-07-21 11:59:39.078162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.315 [2024-07-21 11:59:39.078178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:40.315 [2024-07-21 11:59:39.078186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:17:40.315 [2024-07-21 11:59:39.078193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.315 [2024-07-21 11:59:39.078303] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:17:40.315 [2024-07-21 11:59:39.078321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:40.315 [2024-07-21 11:59:39.078329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:40.315 [2024-07-21 11:59:39.078335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.315 [2024-07-21 11:59:39.084233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.315 [2024-07-21 11:59:39.084263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:40.315 [2024-07-21 11:59:39.084273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.890 ms 00:17:40.315 [2024-07-21 11:59:39.084280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.315 [2024-07-21 11:59:39.086842] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:40.315 [2024-07-21 11:59:39.086874] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:40.315 [2024-07-21 11:59:39.086894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.315 [2024-07-21 11:59:39.086905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:40.315 [2024-07-21 11:59:39.086913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.522 ms 00:17:40.315 [2024-07-21 11:59:39.086920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.315 [2024-07-21 11:59:39.098882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.315 [2024-07-21 11:59:39.098917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:40.315 [2024-07-21 11:59:39.098928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.941 ms 00:17:40.315 [2024-07-21 11:59:39.098940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.315 [2024-07-21 11:59:39.100678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.315 [2024-07-21 11:59:39.100709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:40.315 [2024-07-21 11:59:39.100718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.660 ms 00:17:40.315 [2024-07-21 11:59:39.100724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.315 [2024-07-21 11:59:39.102149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.315 [2024-07-21 11:59:39.102178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:40.315 [2024-07-21 11:59:39.102186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.388 ms 00:17:40.315 [2024-07-21 11:59:39.102193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.315 [2024-07-21 11:59:39.102468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.315 [2024-07-21 11:59:39.102485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:40.315 [2024-07-21 11:59:39.102494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:17:40.315 [2024-07-21 11:59:39.102501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.315 [2024-07-21 11:59:39.122953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.315 [2024-07-21 
11:59:39.123014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:40.315 [2024-07-21 11:59:39.123029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.455 ms 00:17:40.315 [2024-07-21 11:59:39.123036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.315 [2024-07-21 11:59:39.129067] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:40.315 [2024-07-21 11:59:39.145337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.315 [2024-07-21 11:59:39.145387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:40.315 [2024-07-21 11:59:39.145399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.250 ms 00:17:40.315 [2024-07-21 11:59:39.145423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.315 [2024-07-21 11:59:39.145523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.315 [2024-07-21 11:59:39.145532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:40.315 [2024-07-21 11:59:39.145543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:40.315 [2024-07-21 11:59:39.145556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.315 [2024-07-21 11:59:39.145617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.315 [2024-07-21 11:59:39.145626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:40.316 [2024-07-21 11:59:39.145633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:40.316 [2024-07-21 11:59:39.145640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.316 [2024-07-21 11:59:39.145658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.316 [2024-07-21 11:59:39.145665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:40.316 [2024-07-21 11:59:39.145672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:40.316 [2024-07-21 11:59:39.145688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.316 [2024-07-21 11:59:39.145719] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:40.316 [2024-07-21 11:59:39.145734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.316 [2024-07-21 11:59:39.145741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:40.316 [2024-07-21 11:59:39.145748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:40.316 [2024-07-21 11:59:39.145756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.316 [2024-07-21 11:59:39.149546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.316 [2024-07-21 11:59:39.149579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:40.316 [2024-07-21 11:59:39.149589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.780 ms 00:17:40.316 [2024-07-21 11:59:39.149608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.316 [2024-07-21 11:59:39.149689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.316 [2024-07-21 11:59:39.149700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:40.316 [2024-07-21 
11:59:39.149708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:40.316 [2024-07-21 11:59:39.149723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.316 [2024-07-21 11:59:39.150589] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:40.316 [2024-07-21 11:59:39.151529] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 118.167 ms, result 0 00:17:40.316 [2024-07-21 11:59:39.152272] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:40.316 [2024-07-21 11:59:39.162030] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:49.861  Copying: 30/256 [MB] (30 MBps) Copying: 57/256 [MB] (27 MBps) Copying: 84/256 [MB] (27 MBps) Copying: 111/256 [MB] (26 MBps) Copying: 138/256 [MB] (27 MBps) Copying: 166/256 [MB] (27 MBps) Copying: 193/256 [MB] (27 MBps) Copying: 220/256 [MB] (27 MBps) Copying: 247/256 [MB] (27 MBps) Copying: 256/256 [MB] (average 27 MBps)[2024-07-21 11:59:48.453888] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:49.861 [2024-07-21 11:59:48.455354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.861 [2024-07-21 11:59:48.455387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:49.861 [2024-07-21 11:59:48.455400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:49.861 [2024-07-21 11:59:48.455407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.861 [2024-07-21 11:59:48.455426] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:49.861 [2024-07-21 11:59:48.456079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.861 [2024-07-21 11:59:48.456090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:49.862 [2024-07-21 11:59:48.456098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:17:49.862 [2024-07-21 11:59:48.456105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.862 [2024-07-21 11:59:48.456314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.862 [2024-07-21 11:59:48.456324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:49.862 [2024-07-21 11:59:48.456335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:17:49.862 [2024-07-21 11:59:48.456354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.862 [2024-07-21 11:59:48.459161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.862 [2024-07-21 11:59:48.459213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:49.862 [2024-07-21 11:59:48.459239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.799 ms 00:17:49.862 [2024-07-21 11:59:48.459258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.862 [2024-07-21 11:59:48.464710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.862 [2024-07-21 11:59:48.464772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:49.862 [2024-07-21 11:59:48.464821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 5.402 ms 00:17:49.862 [2024-07-21 11:59:48.464852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.862 [2024-07-21 11:59:48.466278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.862 [2024-07-21 11:59:48.466343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:49.862 [2024-07-21 11:59:48.466368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.382 ms 00:17:49.862 [2024-07-21 11:59:48.466388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.862 [2024-07-21 11:59:48.470688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.862 [2024-07-21 11:59:48.470756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:49.862 [2024-07-21 11:59:48.470803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.267 ms 00:17:49.862 [2024-07-21 11:59:48.470831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.862 [2024-07-21 11:59:48.470978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.862 [2024-07-21 11:59:48.471013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:49.862 [2024-07-21 11:59:48.471058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:17:49.862 [2024-07-21 11:59:48.471081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.862 [2024-07-21 11:59:48.473068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.862 [2024-07-21 11:59:48.473139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:49.862 [2024-07-21 11:59:48.473179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.940 ms 00:17:49.862 [2024-07-21 11:59:48.473197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.862 [2024-07-21 11:59:48.474631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.862 [2024-07-21 11:59:48.474693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:49.862 [2024-07-21 11:59:48.474724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.396 ms 00:17:49.862 [2024-07-21 11:59:48.474742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.862 [2024-07-21 11:59:48.475916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.862 [2024-07-21 11:59:48.475976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:49.862 [2024-07-21 11:59:48.476015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.128 ms 00:17:49.862 [2024-07-21 11:59:48.476034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.862 [2024-07-21 11:59:48.477157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.862 [2024-07-21 11:59:48.477219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:49.862 [2024-07-21 11:59:48.477249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.043 ms 00:17:49.862 [2024-07-21 11:59:48.477268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.862 [2024-07-21 11:59:48.477351] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:49.862 [2024-07-21 11:59:48.477387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 
0 state: free 00:17:49.862 [2024-07-21 11:59:48.477427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.477474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.477523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.477558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.477609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.477650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.477712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.477751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.477800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.477860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.477902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.477948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 
261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:49.862 [2024-07-21 11:59:48.478532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478631] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 11:59:48.478792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:49.863 [2024-07-21 
11:59:48.478805] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:49.863 [2024-07-21 11:59:48.478811] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97f51f8a-8f80-4c71-8dcb-fbdd0567b8b7 00:17:49.863 [2024-07-21 11:59:48.478825] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:49.863 [2024-07-21 11:59:48.478832] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:49.863 [2024-07-21 11:59:48.478839] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:49.863 [2024-07-21 11:59:48.478846] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:49.863 [2024-07-21 11:59:48.478852] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:49.863 [2024-07-21 11:59:48.478863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:49.863 [2024-07-21 11:59:48.478881] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:49.863 [2024-07-21 11:59:48.478896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:49.863 [2024-07-21 11:59:48.478902] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:49.863 [2024-07-21 11:59:48.478909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.863 [2024-07-21 11:59:48.478915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:49.863 [2024-07-21 11:59:48.478922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.562 ms 00:17:49.863 [2024-07-21 11:59:48.478932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.863 [2024-07-21 11:59:48.480687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.863 [2024-07-21 11:59:48.480710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:49.863 [2024-07-21 11:59:48.480718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.736 ms 00:17:49.863 [2024-07-21 11:59:48.480729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.863 [2024-07-21 11:59:48.480848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:49.863 [2024-07-21 11:59:48.480858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:49.863 [2024-07-21 11:59:48.480865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:17:49.863 [2024-07-21 11:59:48.480871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.863 [2024-07-21 11:59:48.486468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.863 [2024-07-21 11:59:48.486517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:49.863 [2024-07-21 11:59:48.486546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.863 [2024-07-21 11:59:48.486565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.863 [2024-07-21 11:59:48.486631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.863 [2024-07-21 11:59:48.486651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:49.863 [2024-07-21 11:59:48.486669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.863 [2024-07-21 11:59:48.486686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.863 [2024-07-21 11:59:48.486738] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.863 [2024-07-21 11:59:48.486784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:49.863 [2024-07-21 11:59:48.486802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.863 [2024-07-21 11:59:48.486837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.863 [2024-07-21 11:59:48.486868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.863 [2024-07-21 11:59:48.486924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:49.863 [2024-07-21 11:59:48.486967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.863 [2024-07-21 11:59:48.487009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.863 [2024-07-21 11:59:48.499301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.863 [2024-07-21 11:59:48.499422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:49.863 [2024-07-21 11:59:48.499470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.863 [2024-07-21 11:59:48.499493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.863 [2024-07-21 11:59:48.507575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.863 [2024-07-21 11:59:48.507674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:49.863 [2024-07-21 11:59:48.507702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.863 [2024-07-21 11:59:48.507721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.863 [2024-07-21 11:59:48.507759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.863 [2024-07-21 11:59:48.507795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:49.863 [2024-07-21 11:59:48.507814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.863 [2024-07-21 11:59:48.507840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.863 [2024-07-21 11:59:48.507884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.863 [2024-07-21 11:59:48.507911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:49.863 [2024-07-21 11:59:48.507929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.863 [2024-07-21 11:59:48.507947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.863 [2024-07-21 11:59:48.508041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.863 [2024-07-21 11:59:48.508087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:49.863 [2024-07-21 11:59:48.508116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.863 [2024-07-21 11:59:48.508135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.863 [2024-07-21 11:59:48.508191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.863 [2024-07-21 11:59:48.508231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:49.863 [2024-07-21 11:59:48.508261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.863 [2024-07-21 11:59:48.508280] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:49.863 [2024-07-21 11:59:48.508332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.863 [2024-07-21 11:59:48.508366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:49.863 [2024-07-21 11:59:48.508393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.863 [2024-07-21 11:59:48.508422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.863 [2024-07-21 11:59:48.508495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:49.863 [2024-07-21 11:59:48.508528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:49.863 [2024-07-21 11:59:48.508566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:49.863 [2024-07-21 11:59:48.508583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:49.864 [2024-07-21 11:59:48.508739] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.463 ms, result 0 00:17:50.124 00:17:50.124 00:17:50.124 11:59:48 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:50.124 11:59:48 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:50.382 11:59:49 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:50.640 [2024-07-21 11:59:49.273897] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:50.640 [2024-07-21 11:59:49.274093] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90116 ] 00:17:50.640 [2024-07-21 11:59:49.433851] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.640 [2024-07-21 11:59:49.477358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.899 [2024-07-21 11:59:49.577285] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:50.899 [2024-07-21 11:59:49.577429] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:50.899 [2024-07-21 11:59:49.723552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.899 [2024-07-21 11:59:49.723665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:50.899 [2024-07-21 11:59:49.723694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:50.899 [2024-07-21 11:59:49.723712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.899 [2024-07-21 11:59:49.725637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.899 [2024-07-21 11:59:49.725706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:50.899 [2024-07-21 11:59:49.725732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.886 ms 00:17:50.899 [2024-07-21 11:59:49.725751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.899 [2024-07-21 11:59:49.725861] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:50.899 
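
Note on the statistics dump above (total writes: 960, user writes: 0, WAF: inf): write amplification factor is the ratio of media writes to user writes, so a pass with no user writes yet is reported as infinity. A minimal sketch of that calculation, assuming simple counters (dump_waf is illustrative, not an SPDK API):

#include <stdio.h>
#include <stdint.h>

/* Illustrative only: WAF = total media writes / user writes, printed
 * as "inf" when no user writes have happened yet, as in the dump above. */
static void dump_waf(uint64_t total_writes, uint64_t user_writes)
{
	if (user_writes == 0) {
		printf("WAF: inf\n");
	} else {
		printf("WAF: %.4f\n", (double)total_writes / (double)user_writes);
	}
}

int main(void)
{
	dump_waf(960, 0);   /* values from the dump above -> "WAF: inf"    */
	dump_waf(960, 640); /* hypothetical user writes   -> "WAF: 1.5000" */
	return 0;
}
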
[2024-07-21 11:59:49.726154] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:50.899 [2024-07-21 11:59:49.726212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.899 [2024-07-21 11:59:49.726242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:50.899 [2024-07-21 11:59:49.726262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:17:50.899 [2024-07-21 11:59:49.726303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.899 [2024-07-21 11:59:49.727761] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:50.899 [2024-07-21 11:59:49.730246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.899 [2024-07-21 11:59:49.730312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:50.899 [2024-07-21 11:59:49.730352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.491 ms 00:17:50.899 [2024-07-21 11:59:49.730371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.900 [2024-07-21 11:59:49.730446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.900 [2024-07-21 11:59:49.730473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:50.900 [2024-07-21 11:59:49.730492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:50.900 [2024-07-21 11:59:49.730513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.900 [2024-07-21 11:59:49.737147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.900 [2024-07-21 11:59:49.737203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:50.900 [2024-07-21 11:59:49.737227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.563 ms 00:17:50.900 [2024-07-21 11:59:49.737245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.900 [2024-07-21 11:59:49.737353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.900 [2024-07-21 11:59:49.737382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:50.900 [2024-07-21 11:59:49.737401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:50.900 [2024-07-21 11:59:49.737463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.900 [2024-07-21 11:59:49.737506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.900 [2024-07-21 11:59:49.737531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:50.900 [2024-07-21 11:59:49.737578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:50.900 [2024-07-21 11:59:49.737618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.900 [2024-07-21 11:59:49.737655] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:50.900 [2024-07-21 11:59:49.739278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.900 [2024-07-21 11:59:49.739346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:50.900 [2024-07-21 11:59:49.739380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.633 ms 00:17:50.900 [2024-07-21 11:59:49.739398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:50.900 [2024-07-21 11:59:49.739465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.900 [2024-07-21 11:59:49.739495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:50.900 [2024-07-21 11:59:49.739523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:50.900 [2024-07-21 11:59:49.739541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.900 [2024-07-21 11:59:49.739590] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:50.900 [2024-07-21 11:59:49.739629] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:50.900 [2024-07-21 11:59:49.739724] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:50.900 [2024-07-21 11:59:49.739775] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:17:50.900 [2024-07-21 11:59:49.739889] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:50.900 [2024-07-21 11:59:49.739935] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:50.900 [2024-07-21 11:59:49.739971] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:50.900 [2024-07-21 11:59:49.740021] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:50.900 [2024-07-21 11:59:49.740057] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:50.900 [2024-07-21 11:59:49.740115] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:50.900 [2024-07-21 11:59:49.740158] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:50.900 [2024-07-21 11:59:49.740187] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:50.900 [2024-07-21 11:59:49.740244] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:50.900 [2024-07-21 11:59:49.740264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.900 [2024-07-21 11:59:49.740284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:50.900 [2024-07-21 11:59:49.740322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:17:50.900 [2024-07-21 11:59:49.740348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.900 [2024-07-21 11:59:49.740424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.900 [2024-07-21 11:59:49.740433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:50.900 [2024-07-21 11:59:49.740440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:50.900 [2024-07-21 11:59:49.740446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.900 [2024-07-21 11:59:49.740529] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:50.900 [2024-07-21 11:59:49.740539] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:50.900 [2024-07-21 11:59:49.740554] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:50.900 [2024-07-21 
11:59:49.740562] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.900 [2024-07-21 11:59:49.740570] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:50.900 [2024-07-21 11:59:49.740576] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:50.900 [2024-07-21 11:59:49.740582] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:50.900 [2024-07-21 11:59:49.740588] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:50.900 [2024-07-21 11:59:49.740594] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:50.900 [2024-07-21 11:59:49.740601] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:50.900 [2024-07-21 11:59:49.740615] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:50.900 [2024-07-21 11:59:49.740622] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:50.900 [2024-07-21 11:59:49.740632] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:50.900 [2024-07-21 11:59:49.740638] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:50.900 [2024-07-21 11:59:49.740644] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:50.900 [2024-07-21 11:59:49.740650] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.900 [2024-07-21 11:59:49.740658] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:50.900 [2024-07-21 11:59:49.740665] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:50.900 [2024-07-21 11:59:49.740672] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.900 [2024-07-21 11:59:49.740678] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:50.900 [2024-07-21 11:59:49.740684] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:50.900 [2024-07-21 11:59:49.740690] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.900 [2024-07-21 11:59:49.740696] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:50.900 [2024-07-21 11:59:49.740703] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:50.900 [2024-07-21 11:59:49.740709] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.900 [2024-07-21 11:59:49.740715] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:50.900 [2024-07-21 11:59:49.740721] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:50.900 [2024-07-21 11:59:49.740727] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.900 [2024-07-21 11:59:49.740737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:50.900 [2024-07-21 11:59:49.740743] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:50.900 [2024-07-21 11:59:49.740749] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.900 [2024-07-21 11:59:49.740755] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:50.900 [2024-07-21 11:59:49.740762] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:50.900 [2024-07-21 11:59:49.740769] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:50.900 [2024-07-21 11:59:49.740775] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 
00:17:50.900 [2024-07-21 11:59:49.740781] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:50.900 [2024-07-21 11:59:49.740787] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:50.900 [2024-07-21 11:59:49.740794] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:50.900 [2024-07-21 11:59:49.740800] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:50.900 [2024-07-21 11:59:49.740806] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.900 [2024-07-21 11:59:49.740812] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:50.900 [2024-07-21 11:59:49.740859] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:50.900 [2024-07-21 11:59:49.740867] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.900 [2024-07-21 11:59:49.740874] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:50.900 [2024-07-21 11:59:49.740883] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:50.900 [2024-07-21 11:59:49.740890] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:50.900 [2024-07-21 11:59:49.740897] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.900 [2024-07-21 11:59:49.740904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:50.900 [2024-07-21 11:59:49.740910] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:50.900 [2024-07-21 11:59:49.740916] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:50.900 [2024-07-21 11:59:49.740923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:50.900 [2024-07-21 11:59:49.740929] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:50.900 [2024-07-21 11:59:49.740935] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:50.900 [2024-07-21 11:59:49.740943] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:50.900 [2024-07-21 11:59:49.740952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:50.900 [2024-07-21 11:59:49.740962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:50.900 [2024-07-21 11:59:49.740969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:50.900 [2024-07-21 11:59:49.740975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:50.900 [2024-07-21 11:59:49.740982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:50.900 [2024-07-21 11:59:49.740988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:50.900 [2024-07-21 11:59:49.740997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:50.900 [2024-07-21 11:59:49.741004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 
blk_offs:0x7320 blk_sz:0x800 00:17:50.901 [2024-07-21 11:59:49.741010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:50.901 [2024-07-21 11:59:49.741016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:50.901 [2024-07-21 11:59:49.741023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:50.901 [2024-07-21 11:59:49.741029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:50.901 [2024-07-21 11:59:49.741037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:50.901 [2024-07-21 11:59:49.741043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:50.901 [2024-07-21 11:59:49.741050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:50.901 [2024-07-21 11:59:49.741057] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:50.901 [2024-07-21 11:59:49.741065] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:50.901 [2024-07-21 11:59:49.741072] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:50.901 [2024-07-21 11:59:49.741086] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:50.901 [2024-07-21 11:59:49.741093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:50.901 [2024-07-21 11:59:49.741099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:50.901 [2024-07-21 11:59:49.741107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.901 [2024-07-21 11:59:49.741115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:50.901 [2024-07-21 11:59:49.741122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.631 ms 00:17:50.901 [2024-07-21 11:59:49.741129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.901 [2024-07-21 11:59:49.760466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.901 [2024-07-21 11:59:49.760509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:50.901 [2024-07-21 11:59:49.760523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.322 ms 00:17:50.901 [2024-07-21 11:59:49.760535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.901 [2024-07-21 11:59:49.760676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.901 [2024-07-21 11:59:49.760703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:50.901 [2024-07-21 11:59:49.760723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:50.901 
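
Note on the layout dumps above: dump_region reports each region in MiB, while the superblock metadata layout reports the same regions as hex block offsets and sizes, and with the FTL's 4 KiB block size the two agree. For example, region type 0x2 (the L2P) has blk_sz 0x5a00 = 23040 blocks = 90.00 MiB, which also matches the reported 23592960 L2P entries at an address size of 4 bytes. A quick cross-check, assuming the 4 KiB block:

#include <stdio.h>
#include <stdint.h>

#define FTL_BLOCK_SIZE 4096ULL /* assumption: 4 KiB FTL block */

/* Convert a block count from the superblock dump into MiB. */
static double blocks_to_mib(uint64_t blocks)
{
	return (double)(blocks * FTL_BLOCK_SIZE) / (1024.0 * 1024.0);
}

int main(void)
{
	/* "Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00" above. */
	printf("l2p region: %.2f MiB\n", blocks_to_mib(0x5a00)); /* 90.00 */

	/* Cross-check: "L2P entries: 23592960" x "L2P address size: 4". */
	printf("l2p table:  %.2f MiB\n",
	       (23592960ULL * 4ULL) / (1024.0 * 1024.0));        /* 90.00 */
	return 0;
}
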
[2024-07-21 11:59:49.760732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.159 [2024-07-21 11:59:49.770873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.159 [2024-07-21 11:59:49.770915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:51.159 [2024-07-21 11:59:49.770928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.132 ms 00:17:51.159 [2024-07-21 11:59:49.770941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.159 [2024-07-21 11:59:49.771011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.159 [2024-07-21 11:59:49.771037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:51.159 [2024-07-21 11:59:49.771048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:51.159 [2024-07-21 11:59:49.771068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.159 [2024-07-21 11:59:49.771487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.159 [2024-07-21 11:59:49.771508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:51.159 [2024-07-21 11:59:49.771516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:17:51.159 [2024-07-21 11:59:49.771524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.159 [2024-07-21 11:59:49.771629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.159 [2024-07-21 11:59:49.771640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:51.159 [2024-07-21 11:59:49.771647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:51.159 [2024-07-21 11:59:49.771654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.159 [2024-07-21 11:59:49.777588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.159 [2024-07-21 11:59:49.777620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:51.159 [2024-07-21 11:59:49.777629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.923 ms 00:17:51.159 [2024-07-21 11:59:49.777636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.159 [2024-07-21 11:59:49.780249] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:51.159 [2024-07-21 11:59:49.780284] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:51.159 [2024-07-21 11:59:49.780320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.159 [2024-07-21 11:59:49.780330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:51.159 [2024-07-21 11:59:49.780339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.593 ms 00:17:51.159 [2024-07-21 11:59:49.780346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.159 [2024-07-21 11:59:49.791998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.159 [2024-07-21 11:59:49.792032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:51.159 [2024-07-21 11:59:49.792043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.630 ms 00:17:51.159 [2024-07-21 11:59:49.792054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:51.159 [2024-07-21 11:59:49.793879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.159 [2024-07-21 11:59:49.793909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:51.159 [2024-07-21 11:59:49.793919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.760 ms 00:17:51.159 [2024-07-21 11:59:49.793925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.160 [2024-07-21 11:59:49.795334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.160 [2024-07-21 11:59:49.795364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:51.160 [2024-07-21 11:59:49.795372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.373 ms 00:17:51.160 [2024-07-21 11:59:49.795379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.160 [2024-07-21 11:59:49.795640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.160 [2024-07-21 11:59:49.795658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:51.160 [2024-07-21 11:59:49.795666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:17:51.160 [2024-07-21 11:59:49.795672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.160 [2024-07-21 11:59:49.815223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.160 [2024-07-21 11:59:49.815292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:51.160 [2024-07-21 11:59:49.815307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.564 ms 00:17:51.160 [2024-07-21 11:59:49.815332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.160 [2024-07-21 11:59:49.821037] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:51.160 [2024-07-21 11:59:49.836778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.160 [2024-07-21 11:59:49.836850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:51.160 [2024-07-21 11:59:49.836899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.400 ms 00:17:51.160 [2024-07-21 11:59:49.836907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.160 [2024-07-21 11:59:49.837032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.160 [2024-07-21 11:59:49.837043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:51.160 [2024-07-21 11:59:49.837055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:51.160 [2024-07-21 11:59:49.837062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.160 [2024-07-21 11:59:49.837122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.160 [2024-07-21 11:59:49.837131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:51.160 [2024-07-21 11:59:49.837139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:51.160 [2024-07-21 11:59:49.837145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.160 [2024-07-21 11:59:49.837165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.160 [2024-07-21 11:59:49.837172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 
00:17:51.160 [2024-07-21 11:59:49.837180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:51.160 [2024-07-21 11:59:49.837189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.160 [2024-07-21 11:59:49.837220] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:51.160 [2024-07-21 11:59:49.837228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.160 [2024-07-21 11:59:49.837235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:51.160 [2024-07-21 11:59:49.837242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:51.160 [2024-07-21 11:59:49.837249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.160 [2024-07-21 11:59:49.840979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.160 [2024-07-21 11:59:49.841012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:51.160 [2024-07-21 11:59:49.841021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.709 ms 00:17:51.160 [2024-07-21 11:59:49.841050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.160 [2024-07-21 11:59:49.841133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.160 [2024-07-21 11:59:49.841143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:51.160 [2024-07-21 11:59:49.841152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:51.160 [2024-07-21 11:59:49.841159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.160 [2024-07-21 11:59:49.842000] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:51.160 [2024-07-21 11:59:49.842888] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 118.400 ms, result 0 00:17:51.160 [2024-07-21 11:59:49.843628] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:51.160 [2024-07-21 11:59:49.853478] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:51.160  Copying: 4096/4096 [kB] (average 24 MBps)[2024-07-21 11:59:50.018259] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:51.160 [2024-07-21 11:59:50.019390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.160 [2024-07-21 11:59:50.019423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:51.160 [2024-07-21 11:59:50.019435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:51.160 [2024-07-21 11:59:50.019450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.160 [2024-07-21 11:59:50.019470] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:51.160 [2024-07-21 11:59:50.020134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.160 [2024-07-21 11:59:50.020160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:51.160 [2024-07-21 11:59:50.020169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:17:51.160 [2024-07-21 11:59:50.020176] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:51.160 [2024-07-21 11:59:50.022014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.160 [2024-07-21 11:59:50.022053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:51.160 [2024-07-21 11:59:50.022063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.823 ms 00:17:51.160 [2024-07-21 11:59:50.022069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.420 [2024-07-21 11:59:50.025192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.420 [2024-07-21 11:59:50.025219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:51.420 [2024-07-21 11:59:50.025227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.114 ms 00:17:51.420 [2024-07-21 11:59:50.025234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.420 [2024-07-21 11:59:50.030545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.420 [2024-07-21 11:59:50.030576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:51.420 [2024-07-21 11:59:50.030589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.296 ms 00:17:51.420 [2024-07-21 11:59:50.030596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.420 [2024-07-21 11:59:50.032177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.420 [2024-07-21 11:59:50.032212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:51.420 [2024-07-21 11:59:50.032221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.523 ms 00:17:51.420 [2024-07-21 11:59:50.032227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.420 [2024-07-21 11:59:50.036585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.420 [2024-07-21 11:59:50.036617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:51.420 [2024-07-21 11:59:50.036626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.341 ms 00:17:51.420 [2024-07-21 11:59:50.036667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.420 [2024-07-21 11:59:50.036768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.420 [2024-07-21 11:59:50.036777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:51.420 [2024-07-21 11:59:50.036797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:51.420 [2024-07-21 11:59:50.036804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.420 [2024-07-21 11:59:50.038942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.420 [2024-07-21 11:59:50.038971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:51.420 [2024-07-21 11:59:50.038979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.115 ms 00:17:51.420 [2024-07-21 11:59:50.038985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.420 [2024-07-21 11:59:50.040410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.420 [2024-07-21 11:59:50.040440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:51.420 [2024-07-21 11:59:50.040447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.385 ms 00:17:51.420 
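
Note on the band dump that follows: each line reports a band's valid blocks out of its total capacity, its write count, and its state; after this clean shutdown every band shows 0 / 261120, wr_cnt 0, free. A sketch of the bookkeeping such a line implies (struct band_info is illustrative, not SPDK's actual ftl_band layout):

#include <stdio.h>
#include <stdint.h>

/* Illustrative per-band accounting behind a
 * "Band N: valid / total wr_cnt: W state: S" line. */
struct band_info {
	uint64_t valid_blocks; /* blocks still holding live data  */
	uint64_t total_blocks; /* band capacity in FTL blocks     */
	uint64_t wr_cnt;       /* completed writes to this band   */
	const char *state;     /* e.g. "free", "open", "closed"   */
};

int main(void)
{
	struct band_info bands[3] = {
		{0, 261120, 0, "free"},
		{0, 261120, 0, "free"},
		{0, 261120, 0, "free"},
	};

	printf("Bands validity:\n");
	for (int i = 0; i < 3; i++) {
		printf("Band %d: %llu / %llu wr_cnt: %llu state: %s\n",
		       i + 1,
		       (unsigned long long)bands[i].valid_blocks,
		       (unsigned long long)bands[i].total_blocks,
		       (unsigned long long)bands[i].wr_cnt,
		       bands[i].state);
	}
	return 0;
}
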
[2024-07-21 11:59:50.040454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.420 [2024-07-21 11:59:50.041522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.420 [2024-07-21 11:59:50.041553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:51.420 [2024-07-21 11:59:50.041561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.046 ms 00:17:51.420 [2024-07-21 11:59:50.041568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.420 [2024-07-21 11:59:50.042662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.420 [2024-07-21 11:59:50.042692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:51.420 [2024-07-21 11:59:50.042701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.051 ms 00:17:51.420 [2024-07-21 11:59:50.042707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.420 [2024-07-21 11:59:50.042729] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:51.420 [2024-07-21 11:59:50.042741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 
[2024-07-21 11:59:50.042871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.042997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.043004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.043011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.043018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.043025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.043042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.043049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.043056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 
state: free 00:17:51.420 [2024-07-21 11:59:50.043063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.043070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.043076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.043083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.043090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.043096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:51.420 [2024-07-21 11:59:50.043103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 
0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:51.421 [2024-07-21 11:59:50.043496] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:51.421 [2024-07-21 11:59:50.043503] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97f51f8a-8f80-4c71-8dcb-fbdd0567b8b7 00:17:51.421 [2024-07-21 11:59:50.043511] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:51.421 [2024-07-21 11:59:50.043518] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:51.421 [2024-07-21 11:59:50.043524] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:51.421 [2024-07-21 11:59:50.043532] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:51.421 [2024-07-21 11:59:50.043541] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:51.421 [2024-07-21 11:59:50.043564] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:51.421 [2024-07-21 11:59:50.043571] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:51.421 [2024-07-21 11:59:50.043591] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:51.421 [2024-07-21 11:59:50.043597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:51.421 [2024-07-21 11:59:50.043604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.421 [2024-07-21 11:59:50.043611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:51.421 [2024-07-21 11:59:50.043619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.877 ms 00:17:51.421 [2024-07-21 11:59:50.043630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.421 [2024-07-21 11:59:50.045296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.421 [2024-07-21 11:59:50.045314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:51.421 [2024-07-21 11:59:50.045322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.652 ms 00:17:51.421 [2024-07-21 11:59:50.045332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.421 [2024-07-21 11:59:50.045429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.421 [2024-07-21 11:59:50.045443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:51.421 [2024-07-21 11:59:50.045451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:17:51.421 [2024-07-21 11:59:50.045457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.421 [2024-07-21 11:59:50.051083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.421 [2024-07-21 11:59:50.051147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:51.421 [2024-07-21 11:59:50.051186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.421 [2024-07-21 11:59:50.051206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.421 [2024-07-21 11:59:50.051298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.421 [2024-07-21 11:59:50.051342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:51.421 [2024-07-21 11:59:50.051371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.421 [2024-07-21 11:59:50.051406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.421 [2024-07-21 11:59:50.051487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.421 [2024-07-21 11:59:50.051523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:51.421 [2024-07-21 11:59:50.051551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.421 [2024-07-21 11:59:50.051580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.421 [2024-07-21 11:59:50.051610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.421 [2024-07-21 11:59:50.051660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:51.421 [2024-07-21 11:59:50.051690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.421 [2024-07-21 11:59:50.051709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.421 [2024-07-21 11:59:50.064002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.421 [2024-07-21 11:59:50.064098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:51.421 [2024-07-21 11:59:50.064125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.421 [2024-07-21 11:59:50.064151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.421 [2024-07-21 11:59:50.072097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.421 [2024-07-21 11:59:50.072177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:51.421 [2024-07-21 11:59:50.072205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.421 [2024-07-21 11:59:50.072230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.421 [2024-07-21 11:59:50.072259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.421 [2024-07-21 11:59:50.072267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:51.421 [2024-07-21 11:59:50.072275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.421 [2024-07-21 11:59:50.072281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.421 [2024-07-21 11:59:50.072313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:17:51.421 [2024-07-21 11:59:50.072320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:51.422 [2024-07-21 11:59:50.072327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.422 [2024-07-21 11:59:50.072334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.422 [2024-07-21 11:59:50.072412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.422 [2024-07-21 11:59:50.072421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:51.422 [2024-07-21 11:59:50.072435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.422 [2024-07-21 11:59:50.072442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.422 [2024-07-21 11:59:50.072472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.422 [2024-07-21 11:59:50.072484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:51.422 [2024-07-21 11:59:50.072492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.422 [2024-07-21 11:59:50.072498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.422 [2024-07-21 11:59:50.072532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.422 [2024-07-21 11:59:50.072540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:51.422 [2024-07-21 11:59:50.072547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.422 [2024-07-21 11:59:50.072553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.422 [2024-07-21 11:59:50.072600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:51.422 [2024-07-21 11:59:50.072611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:51.422 [2024-07-21 11:59:50.072618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:51.422 [2024-07-21 11:59:50.072625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.422 [2024-07-21 11:59:50.072749] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.441 ms, result 0 00:17:51.681 00:17:51.681 00:17:51.681 11:59:50 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=90130 00:17:51.681 11:59:50 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:51.681 11:59:50 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 90130 00:17:51.681 11:59:50 ftl.ftl_trim -- common/autotest_common.sh@827 -- # '[' -z 90130 ']' 00:17:51.681 11:59:50 ftl.ftl_trim -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.681 11:59:50 ftl.ftl_trim -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:51.681 11:59:50 ftl.ftl_trim -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.681 11:59:50 ftl.ftl_trim -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:51.681 11:59:50 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:51.681 [2024-07-21 11:59:50.420338] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
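The shell trace above (ftl/trim.sh@92-94 plus common/autotest_common.sh) shows the harness relaunching spdk_tgt with the ftl_init log flag and then blocking in waitforlisten until PID 90130 answers on /var/tmp/spdk.sock; only after that does the load_config call at ftl/trim.sh@96 replay the bdev configuration, which drives the FTL startup trace that follows. A minimal standalone sketch of that launch-and-wait pattern, using the binary and rpc.py paths printed in the log — the retry budget, poll interval, and config.json filename here are illustrative only, and the real helper in common/autotest_common.sh is considerably more thorough:

  #!/usr/bin/env bash
  # Sketch of the launch-and-wait pattern traced above, not the harness itself.
  SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  SOCK=/var/tmp/spdk.sock

  "$SPDK_TGT" -L ftl_init &            # -L enables the ftl_init debug log flag
  svcpid=$!

  # waitforlisten equivalent: poll the UNIX-domain RPC socket until it answers.
  for _ in $(seq 1 100); do            # illustrative retry budget
      "$RPC" -s "$SOCK" rpc_get_methods >/dev/null 2>&1 && break
      sleep 0.1
  done

  "$RPC" -s "$SOCK" load_config < config.json   # hypothetical saved bdev config

Once the socket responds, the same rpc.py entry point is used for the two bdev_ftl_unmap calls seen later in the log (ftl/trim.sh@99 and @100).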
00:17:51.681 [2024-07-21 11:59:50.420559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90130 ] 00:17:51.939 [2024-07-21 11:59:50.578041] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.939 [2024-07-21 11:59:50.621629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.507 11:59:51 ftl.ftl_trim -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:52.507 11:59:51 ftl.ftl_trim -- common/autotest_common.sh@860 -- # return 0 00:17:52.507 11:59:51 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:52.507 [2024-07-21 11:59:51.343446] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:52.507 [2024-07-21 11:59:51.343572] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:52.767 [2024-07-21 11:59:51.505344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.767 [2024-07-21 11:59:51.505462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:52.767 [2024-07-21 11:59:51.505495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:52.767 [2024-07-21 11:59:51.505514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.767 [2024-07-21 11:59:51.507449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.767 [2024-07-21 11:59:51.507524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:52.767 [2024-07-21 11:59:51.507553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.909 ms 00:17:52.767 [2024-07-21 11:59:51.507572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.767 [2024-07-21 11:59:51.507668] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:52.767 [2024-07-21 11:59:51.507932] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:52.767 [2024-07-21 11:59:51.508001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.767 [2024-07-21 11:59:51.508039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:52.767 [2024-07-21 11:59:51.508062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:17:52.768 [2024-07-21 11:59:51.508090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.768 [2024-07-21 11:59:51.509560] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:52.768 [2024-07-21 11:59:51.512036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.768 [2024-07-21 11:59:51.512106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:52.768 [2024-07-21 11:59:51.512149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.485 ms 00:17:52.768 [2024-07-21 11:59:51.512171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.768 [2024-07-21 11:59:51.512244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.768 [2024-07-21 11:59:51.512284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:52.768 [2024-07-21 11:59:51.512306] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:52.768 [2024-07-21 11:59:51.512387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.768 [2024-07-21 11:59:51.519010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.768 [2024-07-21 11:59:51.519082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:52.768 [2024-07-21 11:59:51.519119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.578 ms 00:17:52.768 [2024-07-21 11:59:51.519140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.768 [2024-07-21 11:59:51.519241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.768 [2024-07-21 11:59:51.519273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:52.768 [2024-07-21 11:59:51.519299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:52.768 [2024-07-21 11:59:51.519323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.768 [2024-07-21 11:59:51.519375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.768 [2024-07-21 11:59:51.519398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:52.768 [2024-07-21 11:59:51.519441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:52.768 [2024-07-21 11:59:51.519479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.768 [2024-07-21 11:59:51.519516] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:52.768 [2024-07-21 11:59:51.521128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.768 [2024-07-21 11:59:51.521181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:52.768 [2024-07-21 11:59:51.521214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.620 ms 00:17:52.768 [2024-07-21 11:59:51.521235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.768 [2024-07-21 11:59:51.521301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.768 [2024-07-21 11:59:51.521312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:52.768 [2024-07-21 11:59:51.521321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:52.768 [2024-07-21 11:59:51.521328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.768 [2024-07-21 11:59:51.521349] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:52.768 [2024-07-21 11:59:51.521367] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:52.768 [2024-07-21 11:59:51.521407] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:52.768 [2024-07-21 11:59:51.521425] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:17:52.768 [2024-07-21 11:59:51.521501] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:52.768 [2024-07-21 11:59:51.521514] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:52.768 [2024-07-21 11:59:51.521525] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:52.768 [2024-07-21 11:59:51.521535] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:52.768 [2024-07-21 11:59:51.521544] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:52.768 [2024-07-21 11:59:51.521552] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:52.768 [2024-07-21 11:59:51.521562] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:52.768 [2024-07-21 11:59:51.521569] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:52.768 [2024-07-21 11:59:51.521577] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:52.768 [2024-07-21 11:59:51.521586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.768 [2024-07-21 11:59:51.521594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:52.768 [2024-07-21 11:59:51.521601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:17:52.768 [2024-07-21 11:59:51.521609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.768 [2024-07-21 11:59:51.521672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.768 [2024-07-21 11:59:51.521681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:52.768 [2024-07-21 11:59:51.521687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:17:52.768 [2024-07-21 11:59:51.521702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.768 [2024-07-21 11:59:51.521777] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:52.768 [2024-07-21 11:59:51.521788] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:52.768 [2024-07-21 11:59:51.521795] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:52.768 [2024-07-21 11:59:51.521804] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.768 [2024-07-21 11:59:51.521811] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:52.768 [2024-07-21 11:59:51.521836] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:52.768 [2024-07-21 11:59:51.521844] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:52.768 [2024-07-21 11:59:51.521852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:52.768 [2024-07-21 11:59:51.521858] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:52.768 [2024-07-21 11:59:51.521866] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:52.768 [2024-07-21 11:59:51.521872] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:52.768 [2024-07-21 11:59:51.521881] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:52.768 [2024-07-21 11:59:51.521886] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:52.768 [2024-07-21 11:59:51.521894] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:52.768 [2024-07-21 11:59:51.521899] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:52.768 [2024-07-21 11:59:51.521907] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.768 
[2024-07-21 11:59:51.521913] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:52.768 [2024-07-21 11:59:51.521920] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:52.768 [2024-07-21 11:59:51.521926] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.768 [2024-07-21 11:59:51.521942] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:52.768 [2024-07-21 11:59:51.521948] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:52.768 [2024-07-21 11:59:51.521957] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:52.768 [2024-07-21 11:59:51.521962] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:52.768 [2024-07-21 11:59:51.521970] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:52.768 [2024-07-21 11:59:51.521975] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:52.768 [2024-07-21 11:59:51.521983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:52.768 [2024-07-21 11:59:51.521989] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:52.768 [2024-07-21 11:59:51.521995] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:52.768 [2024-07-21 11:59:51.522001] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:52.768 [2024-07-21 11:59:51.522010] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:52.768 [2024-07-21 11:59:51.522016] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:52.768 [2024-07-21 11:59:51.522024] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:52.768 [2024-07-21 11:59:51.522029] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:52.768 [2024-07-21 11:59:51.522038] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:52.768 [2024-07-21 11:59:51.522044] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:52.768 [2024-07-21 11:59:51.522059] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:52.768 [2024-07-21 11:59:51.522064] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:52.768 [2024-07-21 11:59:51.522073] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:52.768 [2024-07-21 11:59:51.522079] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:52.768 [2024-07-21 11:59:51.522101] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.768 [2024-07-21 11:59:51.522107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:52.768 [2024-07-21 11:59:51.522115] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:52.768 [2024-07-21 11:59:51.522120] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.768 [2024-07-21 11:59:51.522128] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:52.768 [2024-07-21 11:59:51.522135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:52.768 [2024-07-21 11:59:51.522142] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:52.768 [2024-07-21 11:59:51.522154] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:52.768 [2024-07-21 11:59:51.522163] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:52.768 [2024-07-21 11:59:51.522169] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:52.768 [2024-07-21 11:59:51.522177] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:52.768 [2024-07-21 11:59:51.522183] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:52.768 [2024-07-21 11:59:51.522191] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:52.768 [2024-07-21 11:59:51.522197] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:52.768 [2024-07-21 11:59:51.522208] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:52.768 [2024-07-21 11:59:51.522222] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:52.768 [2024-07-21 11:59:51.522231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:52.768 [2024-07-21 11:59:51.522238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:52.768 [2024-07-21 11:59:51.522246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:52.768 [2024-07-21 11:59:51.522253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:52.768 [2024-07-21 11:59:51.522261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:52.768 [2024-07-21 11:59:51.522268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:52.768 [2024-07-21 11:59:51.522293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:52.768 [2024-07-21 11:59:51.522300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:52.768 [2024-07-21 11:59:51.522308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:52.768 [2024-07-21 11:59:51.522315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:52.768 [2024-07-21 11:59:51.522323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:52.768 [2024-07-21 11:59:51.522331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:52.768 [2024-07-21 11:59:51.522338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:52.768 [2024-07-21 11:59:51.522345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:52.768 [2024-07-21 11:59:51.522354] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:52.768 [2024-07-21 
11:59:51.522369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:52.768 [2024-07-21 11:59:51.522380] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:52.768 [2024-07-21 11:59:51.522387] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:52.768 [2024-07-21 11:59:51.522395] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:52.768 [2024-07-21 11:59:51.522402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:52.768 [2024-07-21 11:59:51.522413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.768 [2024-07-21 11:59:51.522421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:52.768 [2024-07-21 11:59:51.522430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:17:52.768 [2024-07-21 11:59:51.522437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.768 [2024-07-21 11:59:51.534133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.768 [2024-07-21 11:59:51.534185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:52.768 [2024-07-21 11:59:51.534199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.656 ms 00:17:52.768 [2024-07-21 11:59:51.534222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.768 [2024-07-21 11:59:51.534334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.768 [2024-07-21 11:59:51.534346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:52.768 [2024-07-21 11:59:51.534357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:52.768 [2024-07-21 11:59:51.534364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.768 [2024-07-21 11:59:51.544287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.768 [2024-07-21 11:59:51.544325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:52.768 [2024-07-21 11:59:51.544353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.921 ms 00:17:52.768 [2024-07-21 11:59:51.544360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.768 [2024-07-21 11:59:51.544425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.768 [2024-07-21 11:59:51.544444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:52.768 [2024-07-21 11:59:51.544455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:52.768 [2024-07-21 11:59:51.544462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.768 [2024-07-21 11:59:51.545007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.768 [2024-07-21 11:59:51.545036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:52.768 [2024-07-21 11:59:51.545057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.525 ms 00:17:52.768 [2024-07-21 11:59:51.545075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:52.768 [2024-07-21 11:59:51.545194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.768 [2024-07-21 11:59:51.545233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:52.768 [2024-07-21 11:59:51.545265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:52.768 [2024-07-21 11:59:51.545299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.768 [2024-07-21 11:59:51.552113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.768 [2024-07-21 11:59:51.552201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:52.768 [2024-07-21 11:59:51.552236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.789 ms 00:17:52.768 [2024-07-21 11:59:51.552257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.768 [2024-07-21 11:59:51.554854] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:52.768 [2024-07-21 11:59:51.554947] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:52.768 [2024-07-21 11:59:51.554985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.768 [2024-07-21 11:59:51.555006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:52.768 [2024-07-21 11:59:51.555027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.613 ms 00:17:52.768 [2024-07-21 11:59:51.555045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.768 [2024-07-21 11:59:51.567050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.768 [2024-07-21 11:59:51.567126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:52.768 [2024-07-21 11:59:51.567159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.955 ms 00:17:52.768 [2024-07-21 11:59:51.567179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.768 [2024-07-21 11:59:51.568885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.768 [2024-07-21 11:59:51.568949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:52.768 [2024-07-21 11:59:51.568979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.626 ms 00:17:52.768 [2024-07-21 11:59:51.568998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.768 [2024-07-21 11:59:51.570414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.768 [2024-07-21 11:59:51.570478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:52.768 [2024-07-21 11:59:51.570492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.364 ms 00:17:52.768 [2024-07-21 11:59:51.570499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.768 [2024-07-21 11:59:51.570769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.768 [2024-07-21 11:59:51.570781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:52.768 [2024-07-21 11:59:51.570802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.204 ms 00:17:52.768 [2024-07-21 11:59:51.570815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.768 [2024-07-21 11:59:51.607518] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.768 [2024-07-21 11:59:51.607617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:52.768 [2024-07-21 11:59:51.607647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.717 ms 00:17:52.769 [2024-07-21 11:59:51.607663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.769 [2024-07-21 11:59:51.616719] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:53.028 [2024-07-21 11:59:51.633330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.028 [2024-07-21 11:59:51.633393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:53.028 [2024-07-21 11:59:51.633406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.581 ms 00:17:53.028 [2024-07-21 11:59:51.633416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.028 [2024-07-21 11:59:51.633507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.028 [2024-07-21 11:59:51.633519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:53.028 [2024-07-21 11:59:51.633530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:53.028 [2024-07-21 11:59:51.633538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.028 [2024-07-21 11:59:51.633589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.028 [2024-07-21 11:59:51.633599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:53.028 [2024-07-21 11:59:51.633607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:53.028 [2024-07-21 11:59:51.633615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.028 [2024-07-21 11:59:51.633639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.028 [2024-07-21 11:59:51.633667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:53.028 [2024-07-21 11:59:51.633675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:53.028 [2024-07-21 11:59:51.633686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.028 [2024-07-21 11:59:51.633718] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:53.028 [2024-07-21 11:59:51.633728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.028 [2024-07-21 11:59:51.633735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:53.028 [2024-07-21 11:59:51.633743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:53.028 [2024-07-21 11:59:51.633750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.028 [2024-07-21 11:59:51.637432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.028 [2024-07-21 11:59:51.637468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:53.028 [2024-07-21 11:59:51.637488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.665 ms 00:17:53.028 [2024-07-21 11:59:51.637495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.028 [2024-07-21 11:59:51.637587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.028 [2024-07-21 11:59:51.637597] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:53.028 [2024-07-21 11:59:51.637606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:53.028 [2024-07-21 11:59:51.637613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.028 [2024-07-21 11:59:51.638517] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:53.028 [2024-07-21 11:59:51.639500] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 133.130 ms, result 0 00:17:53.028 [2024-07-21 11:59:51.640465] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:53.028 Some configs were skipped because the RPC state that can call them passed over. 00:17:53.028 11:59:51 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:53.028 [2024-07-21 11:59:51.843188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.028 [2024-07-21 11:59:51.843318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:53.028 [2024-07-21 11:59:51.843374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.699 ms 00:17:53.028 [2024-07-21 11:59:51.843399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.028 [2024-07-21 11:59:51.843495] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.001 ms, result 0 00:17:53.028 true 00:17:53.028 11:59:51 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:53.287 [2024-07-21 11:59:52.018495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.287 [2024-07-21 11:59:52.018650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:53.287 [2024-07-21 11:59:52.018674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.144 ms 00:17:53.288 [2024-07-21 11:59:52.018683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.288 [2024-07-21 11:59:52.018728] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.394 ms, result 0 00:17:53.288 true 00:17:53.288 11:59:52 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 90130 00:17:53.288 11:59:52 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 90130 ']' 00:17:53.288 11:59:52 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 90130 00:17:53.288 11:59:52 ftl.ftl_trim -- common/autotest_common.sh@951 -- # uname 00:17:53.288 11:59:52 ftl.ftl_trim -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:53.288 11:59:52 ftl.ftl_trim -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90130 00:17:53.288 killing process with pid 90130 00:17:53.288 11:59:52 ftl.ftl_trim -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:53.288 11:59:52 ftl.ftl_trim -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:53.288 11:59:52 ftl.ftl_trim -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90130' 00:17:53.288 11:59:52 ftl.ftl_trim -- common/autotest_common.sh@965 -- # kill 90130 00:17:53.288 11:59:52 ftl.ftl_trim -- common/autotest_common.sh@970 -- # wait 90130 00:17:53.549 [2024-07-21 11:59:52.211250] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.549 [2024-07-21 11:59:52.211318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:53.549 [2024-07-21 11:59:52.211332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:53.549 [2024-07-21 11:59:52.211340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.549 [2024-07-21 11:59:52.211362] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:53.549 [2024-07-21 11:59:52.212029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.549 [2024-07-21 11:59:52.212053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:53.549 [2024-07-21 11:59:52.212063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:17:53.549 [2024-07-21 11:59:52.212072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.549 [2024-07-21 11:59:52.212307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.549 [2024-07-21 11:59:52.212316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:53.549 [2024-07-21 11:59:52.212325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:17:53.549 [2024-07-21 11:59:52.212332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.549 [2024-07-21 11:59:52.215544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.549 [2024-07-21 11:59:52.215584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:53.549 [2024-07-21 11:59:52.215610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.197 ms 00:17:53.549 [2024-07-21 11:59:52.215618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.549 [2024-07-21 11:59:52.221433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.549 [2024-07-21 11:59:52.221471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:53.549 [2024-07-21 11:59:52.221483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.778 ms 00:17:53.549 [2024-07-21 11:59:52.221491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.549 [2024-07-21 11:59:52.223008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.549 [2024-07-21 11:59:52.223043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:53.549 [2024-07-21 11:59:52.223054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.446 ms 00:17:53.549 [2024-07-21 11:59:52.223061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.549 [2024-07-21 11:59:52.227905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.549 [2024-07-21 11:59:52.227944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:53.549 [2024-07-21 11:59:52.227955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.783 ms 00:17:53.549 [2024-07-21 11:59:52.227979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.549 [2024-07-21 11:59:52.228099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.549 [2024-07-21 11:59:52.228109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:53.549 [2024-07-21 11:59:52.228119] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:53.549 [2024-07-21 11:59:52.228127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.549 [2024-07-21 11:59:52.230281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.549 [2024-07-21 11:59:52.230315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:53.549 [2024-07-21 11:59:52.230329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.136 ms 00:17:53.549 [2024-07-21 11:59:52.230336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.549 [2024-07-21 11:59:52.231844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.549 [2024-07-21 11:59:52.231875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:53.549 [2024-07-21 11:59:52.231887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.472 ms 00:17:53.549 [2024-07-21 11:59:52.231893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.549 [2024-07-21 11:59:52.233033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.549 [2024-07-21 11:59:52.233064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:53.549 [2024-07-21 11:59:52.233075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.109 ms 00:17:53.549 [2024-07-21 11:59:52.233081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.549 [2024-07-21 11:59:52.234172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.549 [2024-07-21 11:59:52.234204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:53.549 [2024-07-21 11:59:52.234215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.037 ms 00:17:53.549 [2024-07-21 11:59:52.234222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.549 [2024-07-21 11:59:52.234252] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:53.549 [2024-07-21 11:59:52.234267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:53.549 [2024-07-21 11:59:52.234278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:53.549 [2024-07-21 11:59:52.234286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:53.549 [2024-07-21 11:59:52.234298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:53.549 [2024-07-21 11:59:52.234306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:53.549 [2024-07-21 11:59:52.234315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:53.549 [2024-07-21 11:59:52.234323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:53.549 [2024-07-21 11:59:52.234331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:53.549 [2024-07-21 11:59:52.234340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:53.549 [2024-07-21 11:59:52.234350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:53.549 [2024-07-21 11:59:52.234358] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:53.549 [2024-07-21 11:59:52.234367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:53.549 [2024-07-21 11:59:52.234374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:53.549 [2024-07-21 11:59:52.234383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 
11:59:52.234572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 
00:17:53.550 [2024-07-21 11:59:52.234775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 
wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.234999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.235006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.235027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.235034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.235048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.235056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.235080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.235088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.235119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.235127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.235136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.235157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.235179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.235199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.235210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:53.550 [2024-07-21 11:59:52.235224] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:53.550 [2024-07-21 11:59:52.235234] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97f51f8a-8f80-4c71-8dcb-fbdd0567b8b7 00:17:53.550 [2024-07-21 11:59:52.235241] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:53.550 [2024-07-21 11:59:52.235250] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:53.550 [2024-07-21 11:59:52.235259] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:53.550 [2024-07-21 11:59:52.235268] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:53.550 [2024-07-21 11:59:52.235275] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:53.550 [2024-07-21 11:59:52.235283] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:53.550 [2024-07-21 11:59:52.235292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:53.550 [2024-07-21 11:59:52.235300] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:53.550 [2024-07-21 11:59:52.235306] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:53.550 [2024-07-21 11:59:52.235315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.550 
[2024-07-21 11:59:52.235322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:53.551 [2024-07-21 11:59:52.235333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.067 ms 00:17:53.551 [2024-07-21 11:59:52.235347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.551 [2024-07-21 11:59:52.237098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.551 [2024-07-21 11:59:52.237117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:53.551 [2024-07-21 11:59:52.237127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.731 ms 00:17:53.551 [2024-07-21 11:59:52.237134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.551 [2024-07-21 11:59:52.237239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.551 [2024-07-21 11:59:52.237247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:53.551 [2024-07-21 11:59:52.237256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:17:53.551 [2024-07-21 11:59:52.237263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.551 [2024-07-21 11:59:52.243577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.551 [2024-07-21 11:59:52.243602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:53.551 [2024-07-21 11:59:52.243613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.551 [2024-07-21 11:59:52.243623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.551 [2024-07-21 11:59:52.243701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.551 [2024-07-21 11:59:52.243718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:53.551 [2024-07-21 11:59:52.243729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.551 [2024-07-21 11:59:52.243736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.551 [2024-07-21 11:59:52.243782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.551 [2024-07-21 11:59:52.243793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:53.551 [2024-07-21 11:59:52.243802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.551 [2024-07-21 11:59:52.243810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.551 [2024-07-21 11:59:52.243856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.551 [2024-07-21 11:59:52.243872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:53.551 [2024-07-21 11:59:52.243881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.551 [2024-07-21 11:59:52.243889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.551 [2024-07-21 11:59:52.257135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.551 [2024-07-21 11:59:52.257178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:53.551 [2024-07-21 11:59:52.257189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.551 [2024-07-21 11:59:52.257196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.551 [2024-07-21 11:59:52.265245] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.551 [2024-07-21 11:59:52.265281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:53.551 [2024-07-21 11:59:52.265294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.551 [2024-07-21 11:59:52.265317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.551 [2024-07-21 11:59:52.265364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.551 [2024-07-21 11:59:52.265373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:53.551 [2024-07-21 11:59:52.265384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.551 [2024-07-21 11:59:52.265391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.551 [2024-07-21 11:59:52.265423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.551 [2024-07-21 11:59:52.265430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:53.551 [2024-07-21 11:59:52.265448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.551 [2024-07-21 11:59:52.265456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.551 [2024-07-21 11:59:52.265529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.551 [2024-07-21 11:59:52.265539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:53.551 [2024-07-21 11:59:52.265548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.551 [2024-07-21 11:59:52.265558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.551 [2024-07-21 11:59:52.265593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.551 [2024-07-21 11:59:52.265602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:53.551 [2024-07-21 11:59:52.265611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.551 [2024-07-21 11:59:52.265617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.551 [2024-07-21 11:59:52.265658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.551 [2024-07-21 11:59:52.265667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:53.551 [2024-07-21 11:59:52.265676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.551 [2024-07-21 11:59:52.265684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.551 [2024-07-21 11:59:52.265732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:53.551 [2024-07-21 11:59:52.265741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:53.551 [2024-07-21 11:59:52.265749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:53.551 [2024-07-21 11:59:52.265756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.551 [2024-07-21 11:59:52.265912] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 54.741 ms, result 0 00:17:53.811 11:59:52 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:53.811 
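The spdk_dd invocation above reads --count=65536 blocks from the ftl0 bdev into test/ftl/data; its startup banner and copy progress follow below. As a minimal Python sanity check of the sizes this run reports (a sketch, not part of the test: the 4 KiB FTL block size is inferred from the "256/256 [MB]" copy total, since the log never states the block size directly):

    # Cross-check the sizes reported in this log run.
    # Assumption: 4096-byte FTL blocks, inferred from 65536 blocks == 256 MB.
    MIB = 1024 * 1024

    # spdk_dd copied --count=65536 blocks out of ftl0.
    blocks_copied = 65536
    block_size = 4096
    assert blocks_copied * block_size == 256 * MIB   # matches "Copying: 256/256 [MB]"

    # One 4-byte L2P entry per logical block ("L2P entries: 23592960",
    # "L2P address size: 4") is exactly the 90.00 MiB l2p region in the
    # layout dump.
    l2p_entries = 23592960
    l2p_addr_size = 4
    assert l2p_entries * l2p_addr_size == 90 * MIB

    # The second bdev_ftl_unmap call (--lba 23591936 --num_blocks 1024)
    # trims precisely the last 1024 blocks of the 23592960-block space.
    assert 23591936 + 1024 == l2p_entries

    print("all sizes consistent")

Running this prints "all sizes consistent", matching the 90.00 MiB l2p region and the 256 MB copy total seen in the log.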
[2024-07-21 11:59:52.598795] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:17:53.811 [2024-07-21 11:59:52.598925] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90167 ] 00:17:54.070 [2024-07-21 11:59:52.758427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.070 [2024-07-21 11:59:52.804736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.070 [2024-07-21 11:59:52.904329] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:54.070 [2024-07-21 11:59:52.904397] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:54.331 [2024-07-21 11:59:53.050044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.331 [2024-07-21 11:59:53.050092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:54.331 [2024-07-21 11:59:53.050132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:54.331 [2024-07-21 11:59:53.050139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.331 [2024-07-21 11:59:53.052061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.331 [2024-07-21 11:59:53.052097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:54.331 [2024-07-21 11:59:53.052107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.908 ms 00:17:54.331 [2024-07-21 11:59:53.052113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.331 [2024-07-21 11:59:53.052181] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:54.331 [2024-07-21 11:59:53.052388] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:54.331 [2024-07-21 11:59:53.052408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.331 [2024-07-21 11:59:53.052415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:54.331 [2024-07-21 11:59:53.052423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:17:54.331 [2024-07-21 11:59:53.052432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.331 [2024-07-21 11:59:53.053886] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:54.331 [2024-07-21 11:59:53.056246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.331 [2024-07-21 11:59:53.056277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:54.331 [2024-07-21 11:59:53.056288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.367 ms 00:17:54.331 [2024-07-21 11:59:53.056321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.331 [2024-07-21 11:59:53.056381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.331 [2024-07-21 11:59:53.056391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:54.331 [2024-07-21 11:59:53.056399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:54.331 [2024-07-21 11:59:53.056421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.331 [2024-07-21 
11:59:53.063046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.331 [2024-07-21 11:59:53.063071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:54.331 [2024-07-21 11:59:53.063079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.582 ms 00:17:54.331 [2024-07-21 11:59:53.063086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.331 [2024-07-21 11:59:53.063206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.331 [2024-07-21 11:59:53.063218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:54.331 [2024-07-21 11:59:53.063226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:17:54.331 [2024-07-21 11:59:53.063236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.331 [2024-07-21 11:59:53.063264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.331 [2024-07-21 11:59:53.063274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:54.331 [2024-07-21 11:59:53.063282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:54.331 [2024-07-21 11:59:53.063288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.331 [2024-07-21 11:59:53.063315] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:54.331 [2024-07-21 11:59:53.064884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.331 [2024-07-21 11:59:53.064913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:54.331 [2024-07-21 11:59:53.064925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.577 ms 00:17:54.331 [2024-07-21 11:59:53.064939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.331 [2024-07-21 11:59:53.064979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.331 [2024-07-21 11:59:53.064987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:54.331 [2024-07-21 11:59:53.065005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:54.331 [2024-07-21 11:59:53.065011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.331 [2024-07-21 11:59:53.065027] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:54.331 [2024-07-21 11:59:53.065052] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:54.331 [2024-07-21 11:59:53.065088] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:54.331 [2024-07-21 11:59:53.065108] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:17:54.331 [2024-07-21 11:59:53.065187] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:54.331 [2024-07-21 11:59:53.065196] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:54.331 [2024-07-21 11:59:53.065204] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:17:54.331 [2024-07-21 11:59:53.065212] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device 
capacity: 103424.00 MiB 00:17:54.331 [2024-07-21 11:59:53.065220] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:54.331 [2024-07-21 11:59:53.065233] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:54.331 [2024-07-21 11:59:53.065239] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:54.331 [2024-07-21 11:59:53.065253] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:54.331 [2024-07-21 11:59:53.065261] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:54.331 [2024-07-21 11:59:53.065268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.331 [2024-07-21 11:59:53.065275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:54.331 [2024-07-21 11:59:53.065282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:17:54.331 [2024-07-21 11:59:53.065288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.331 [2024-07-21 11:59:53.065359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.331 [2024-07-21 11:59:53.065367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:54.331 [2024-07-21 11:59:53.065374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:54.331 [2024-07-21 11:59:53.065379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.331 [2024-07-21 11:59:53.065454] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:54.331 [2024-07-21 11:59:53.065463] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:54.331 [2024-07-21 11:59:53.065470] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.331 [2024-07-21 11:59:53.065486] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.331 [2024-07-21 11:59:53.065492] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:54.331 [2024-07-21 11:59:53.065498] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:54.331 [2024-07-21 11:59:53.065504] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:54.331 [2024-07-21 11:59:53.065511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:54.331 [2024-07-21 11:59:53.065517] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:54.331 [2024-07-21 11:59:53.065523] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.331 [2024-07-21 11:59:53.065538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:54.331 [2024-07-21 11:59:53.065544] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:54.331 [2024-07-21 11:59:53.065553] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.331 [2024-07-21 11:59:53.065559] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:54.331 [2024-07-21 11:59:53.065565] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:54.331 [2024-07-21 11:59:53.065571] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.331 [2024-07-21 11:59:53.065576] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:54.331 [2024-07-21 11:59:53.065582] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 
124.00 MiB 00:17:54.331 [2024-07-21 11:59:53.065587] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.331 [2024-07-21 11:59:53.065594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:54.331 [2024-07-21 11:59:53.065599] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:54.331 [2024-07-21 11:59:53.065605] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.331 [2024-07-21 11:59:53.065610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:54.331 [2024-07-21 11:59:53.065615] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:54.331 [2024-07-21 11:59:53.065621] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.331 [2024-07-21 11:59:53.065627] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:54.331 [2024-07-21 11:59:53.065632] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:54.331 [2024-07-21 11:59:53.065637] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.331 [2024-07-21 11:59:53.065647] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:54.331 [2024-07-21 11:59:53.065652] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:54.331 [2024-07-21 11:59:53.065658] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.331 [2024-07-21 11:59:53.065663] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:54.331 [2024-07-21 11:59:53.065668] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:54.331 [2024-07-21 11:59:53.065674] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.331 [2024-07-21 11:59:53.065679] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:54.331 [2024-07-21 11:59:53.065685] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:54.331 [2024-07-21 11:59:53.065690] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.331 [2024-07-21 11:59:53.065695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:54.331 [2024-07-21 11:59:53.065701] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:54.331 [2024-07-21 11:59:53.065707] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.331 [2024-07-21 11:59:53.065712] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:54.331 [2024-07-21 11:59:53.065719] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:54.331 [2024-07-21 11:59:53.065725] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.331 [2024-07-21 11:59:53.065731] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:54.331 [2024-07-21 11:59:53.065739] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:54.331 [2024-07-21 11:59:53.065745] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.331 [2024-07-21 11:59:53.065751] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.331 [2024-07-21 11:59:53.065757] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:54.331 [2024-07-21 11:59:53.065763] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:54.331 [2024-07-21 11:59:53.065769] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:54.331 [2024-07-21 11:59:53.065775] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:54.331 [2024-07-21 11:59:53.065781] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:54.331 [2024-07-21 11:59:53.065786] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:54.331 [2024-07-21 11:59:53.065793] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:54.331 [2024-07-21 11:59:53.065801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.331 [2024-07-21 11:59:53.065832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:54.331 [2024-07-21 11:59:53.065856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:54.331 [2024-07-21 11:59:53.065863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:54.331 [2024-07-21 11:59:53.065869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:54.331 [2024-07-21 11:59:53.065876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:54.331 [2024-07-21 11:59:53.065892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:54.331 [2024-07-21 11:59:53.065899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:54.331 [2024-07-21 11:59:53.065905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:54.331 [2024-07-21 11:59:53.065911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:54.331 [2024-07-21 11:59:53.065918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:54.332 [2024-07-21 11:59:53.065924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:54.332 [2024-07-21 11:59:53.065930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:54.332 [2024-07-21 11:59:53.065937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:54.332 [2024-07-21 11:59:53.065943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:54.332 [2024-07-21 11:59:53.065949] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:54.332 [2024-07-21 11:59:53.065957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.332 [2024-07-21 11:59:53.065965] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:54.332 [2024-07-21 11:59:53.065971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:54.332 [2024-07-21 11:59:53.065978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:54.332 [2024-07-21 11:59:53.065984] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:54.332 [2024-07-21 11:59:53.065992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.332 [2024-07-21 11:59:53.066001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:54.332 [2024-07-21 11:59:53.066008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:17:54.332 [2024-07-21 11:59:53.066023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.332 [2024-07-21 11:59:53.085381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.332 [2024-07-21 11:59:53.085415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:54.332 [2024-07-21 11:59:53.085456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.346 ms 00:17:54.332 [2024-07-21 11:59:53.085473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.332 [2024-07-21 11:59:53.085590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.332 [2024-07-21 11:59:53.085599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:54.332 [2024-07-21 11:59:53.085606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:54.332 [2024-07-21 11:59:53.085613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.332 [2024-07-21 11:59:53.095553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.332 [2024-07-21 11:59:53.095594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:54.332 [2024-07-21 11:59:53.095616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.940 ms 00:17:54.332 [2024-07-21 11:59:53.095642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.332 [2024-07-21 11:59:53.095700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.332 [2024-07-21 11:59:53.095709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:54.332 [2024-07-21 11:59:53.095716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:54.332 [2024-07-21 11:59:53.095723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.332 [2024-07-21 11:59:53.096154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.332 [2024-07-21 11:59:53.096168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:54.332 [2024-07-21 11:59:53.096176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:17:54.332 [2024-07-21 11:59:53.096183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.332 [2024-07-21 11:59:53.096292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.332 [2024-07-21 11:59:53.096310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands metadata 00:17:54.332 [2024-07-21 11:59:53.096326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:54.332 [2024-07-21 11:59:53.096340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.332 [2024-07-21 11:59:53.102355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.332 [2024-07-21 11:59:53.102387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:54.332 [2024-07-21 11:59:53.102404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.007 ms 00:17:54.332 [2024-07-21 11:59:53.102411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.332 [2024-07-21 11:59:53.104962] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:54.332 [2024-07-21 11:59:53.104995] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:54.332 [2024-07-21 11:59:53.105031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.332 [2024-07-21 11:59:53.105040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:54.332 [2024-07-21 11:59:53.105049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.531 ms 00:17:54.332 [2024-07-21 11:59:53.105055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.332 [2024-07-21 11:59:53.116668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.332 [2024-07-21 11:59:53.116705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:54.332 [2024-07-21 11:59:53.116716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.590 ms 00:17:54.332 [2024-07-21 11:59:53.116743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.332 [2024-07-21 11:59:53.118580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.332 [2024-07-21 11:59:53.118613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:54.332 [2024-07-21 11:59:53.118633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.770 ms 00:17:54.332 [2024-07-21 11:59:53.118639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.332 [2024-07-21 11:59:53.120068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.332 [2024-07-21 11:59:53.120098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:54.332 [2024-07-21 11:59:53.120106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.375 ms 00:17:54.332 [2024-07-21 11:59:53.120112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.332 [2024-07-21 11:59:53.120369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.332 [2024-07-21 11:59:53.120386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:54.332 [2024-07-21 11:59:53.120395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:17:54.332 [2024-07-21 11:59:53.120402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.332 [2024-07-21 11:59:53.140142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.332 [2024-07-21 11:59:53.140210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:54.332 
[2024-07-21 11:59:53.140224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.755 ms 00:17:54.332 [2024-07-21 11:59:53.140232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.332 [2024-07-21 11:59:53.146026] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:54.332 [2024-07-21 11:59:53.161787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.332 [2024-07-21 11:59:53.161857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:54.332 [2024-07-21 11:59:53.161869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.515 ms 00:17:54.332 [2024-07-21 11:59:53.161877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.332 [2024-07-21 11:59:53.161967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.332 [2024-07-21 11:59:53.161984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:54.332 [2024-07-21 11:59:53.161996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:54.332 [2024-07-21 11:59:53.162003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.332 [2024-07-21 11:59:53.162061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.332 [2024-07-21 11:59:53.162070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:54.332 [2024-07-21 11:59:53.162077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:54.332 [2024-07-21 11:59:53.162084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.332 [2024-07-21 11:59:53.162103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.332 [2024-07-21 11:59:53.162110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:54.332 [2024-07-21 11:59:53.162117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:54.332 [2024-07-21 11:59:53.162132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.332 [2024-07-21 11:59:53.162162] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:54.332 [2024-07-21 11:59:53.162170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.332 [2024-07-21 11:59:53.162176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:54.332 [2024-07-21 11:59:53.162183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:54.332 [2024-07-21 11:59:53.162190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.332 [2024-07-21 11:59:53.166136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.332 [2024-07-21 11:59:53.166208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:54.332 [2024-07-21 11:59:53.166252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.936 ms 00:17:54.332 [2024-07-21 11:59:53.166277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.332 [2024-07-21 11:59:53.166373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.332 [2024-07-21 11:59:53.166400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:54.332 [2024-07-21 11:59:53.166420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:54.332 [2024-07-21 
11:59:53.166468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.332 [2024-07-21 11:59:53.167543] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:54.332 [2024-07-21 11:59:53.168544] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 117.436 ms, result 0 00:17:54.332 [2024-07-21 11:59:53.169287] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:54.332 [2024-07-21 11:59:53.178680] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:03.920  Copying: 30/256 [MB] (30 MBps) Copying: 58/256 [MB] (28 MBps) Copying: 86/256 [MB] (28 MBps) Copying: 114/256 [MB] (27 MBps) Copying: 142/256 [MB] (27 MBps) Copying: 169/256 [MB] (27 MBps) Copying: 197/256 [MB] (27 MBps) Copying: 224/256 [MB] (26 MBps) Copying: 251/256 [MB] (27 MBps) Copying: 256/256 [MB] (average 27 MBps)[2024-07-21 12:00:02.673899] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:03.920 [2024-07-21 12:00:02.676374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.920 [2024-07-21 12:00:02.676446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:03.920 [2024-07-21 12:00:02.676474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:03.920 [2024-07-21 12:00:02.676491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.920 [2024-07-21 12:00:02.676538] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:03.920 [2024-07-21 12:00:02.677389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.920 [2024-07-21 12:00:02.677437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:03.920 [2024-07-21 12:00:02.677480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.825 ms 00:18:03.920 [2024-07-21 12:00:02.677497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.920 [2024-07-21 12:00:02.678062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.920 [2024-07-21 12:00:02.678094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:03.920 [2024-07-21 12:00:02.678121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:18:03.920 [2024-07-21 12:00:02.678137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.920 [2024-07-21 12:00:02.684851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.920 [2024-07-21 12:00:02.684894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:03.921 [2024-07-21 12:00:02.684910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.690 ms 00:18:03.921 [2024-07-21 12:00:02.684921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.921 [2024-07-21 12:00:02.694443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.921 [2024-07-21 12:00:02.694486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:03.921 [2024-07-21 12:00:02.694504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.481 ms 00:18:03.921 [2024-07-21 12:00:02.694512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:03.921 [2024-07-21 12:00:02.696408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.921 [2024-07-21 12:00:02.696449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:03.921 [2024-07-21 12:00:02.696460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.839 ms 00:18:03.921 [2024-07-21 12:00:02.696468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.921 [2024-07-21 12:00:02.700911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.921 [2024-07-21 12:00:02.700952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:03.921 [2024-07-21 12:00:02.700964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.414 ms 00:18:03.921 [2024-07-21 12:00:02.700973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.921 [2024-07-21 12:00:02.701096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.921 [2024-07-21 12:00:02.701107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:03.921 [2024-07-21 12:00:02.701134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:18:03.921 [2024-07-21 12:00:02.701142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.921 [2024-07-21 12:00:02.703071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.921 [2024-07-21 12:00:02.703106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:03.921 [2024-07-21 12:00:02.703114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.911 ms 00:18:03.921 [2024-07-21 12:00:02.703129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.921 [2024-07-21 12:00:02.704737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.921 [2024-07-21 12:00:02.704771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:03.921 [2024-07-21 12:00:02.704779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.577 ms 00:18:03.921 [2024-07-21 12:00:02.704785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.921 [2024-07-21 12:00:02.705920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.921 [2024-07-21 12:00:02.705951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:03.921 [2024-07-21 12:00:02.705960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.112 ms 00:18:03.921 [2024-07-21 12:00:02.705967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.921 [2024-07-21 12:00:02.707058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.921 [2024-07-21 12:00:02.707089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:03.921 [2024-07-21 12:00:02.707096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.043 ms 00:18:03.921 [2024-07-21 12:00:02.707103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.921 [2024-07-21 12:00:02.707134] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:03.921 [2024-07-21 12:00:02.707148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707505] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:03.921 [2024-07-21 12:00:02.707611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 
12:00:02.707680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:03.922 [2024-07-21 12:00:02.707877] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:03.922 [2024-07-21 12:00:02.707895] 
ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97f51f8a-8f80-4c71-8dcb-fbdd0567b8b7 00:18:03.922 [2024-07-21 12:00:02.707915] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:03.922 [2024-07-21 12:00:02.707922] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:03.922 [2024-07-21 12:00:02.707929] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:03.922 [2024-07-21 12:00:02.707936] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:03.922 [2024-07-21 12:00:02.707942] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:03.922 [2024-07-21 12:00:02.707953] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:03.922 [2024-07-21 12:00:02.707960] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:03.922 [2024-07-21 12:00:02.707976] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:03.922 [2024-07-21 12:00:02.707982] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:03.922 [2024-07-21 12:00:02.707988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.922 [2024-07-21 12:00:02.707996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:03.922 [2024-07-21 12:00:02.708004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.857 ms 00:18:03.922 [2024-07-21 12:00:02.708013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.922 [2024-07-21 12:00:02.709692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.922 [2024-07-21 12:00:02.709712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:03.922 [2024-07-21 12:00:02.709719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.663 ms 00:18:03.922 [2024-07-21 12:00:02.709730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.922 [2024-07-21 12:00:02.709839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.922 [2024-07-21 12:00:02.709847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:03.922 [2024-07-21 12:00:02.709855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:18:03.922 [2024-07-21 12:00:02.709861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.922 [2024-07-21 12:00:02.715409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.922 [2024-07-21 12:00:02.715462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:03.922 [2024-07-21 12:00:02.715491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.922 [2024-07-21 12:00:02.715522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.922 [2024-07-21 12:00:02.715592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.922 [2024-07-21 12:00:02.715613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:03.922 [2024-07-21 12:00:02.715631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.922 [2024-07-21 12:00:02.715676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.922 [2024-07-21 12:00:02.715729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.922 [2024-07-21 12:00:02.715784] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:03.922 [2024-07-21 12:00:02.715812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.922 [2024-07-21 12:00:02.715848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.922 [2024-07-21 12:00:02.715879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.922 [2024-07-21 12:00:02.715931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:03.922 [2024-07-21 12:00:02.715952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.922 [2024-07-21 12:00:02.715970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.922 [2024-07-21 12:00:02.729062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.922 [2024-07-21 12:00:02.729184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:03.922 [2024-07-21 12:00:02.729211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.922 [2024-07-21 12:00:02.729236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.922 [2024-07-21 12:00:02.737204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.922 [2024-07-21 12:00:02.737322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:03.922 [2024-07-21 12:00:02.737350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.922 [2024-07-21 12:00:02.737368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.922 [2024-07-21 12:00:02.737409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.922 [2024-07-21 12:00:02.737429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:03.922 [2024-07-21 12:00:02.737446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.922 [2024-07-21 12:00:02.737463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.922 [2024-07-21 12:00:02.737506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.922 [2024-07-21 12:00:02.737539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:03.922 [2024-07-21 12:00:02.737558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.922 [2024-07-21 12:00:02.737575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.922 [2024-07-21 12:00:02.737656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.922 [2024-07-21 12:00:02.737693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:03.922 [2024-07-21 12:00:02.737719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.922 [2024-07-21 12:00:02.737738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.922 [2024-07-21 12:00:02.737789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.922 [2024-07-21 12:00:02.737871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:03.922 [2024-07-21 12:00:02.737921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.922 [2024-07-21 12:00:02.737941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.922 [2024-07-21 12:00:02.738013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:18:03.922 [2024-07-21 12:00:02.738043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:03.922 [2024-07-21 12:00:02.738074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.922 [2024-07-21 12:00:02.738095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.922 [2024-07-21 12:00:02.738172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:03.922 [2024-07-21 12:00:02.738205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:03.922 [2024-07-21 12:00:02.738232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:03.922 [2024-07-21 12:00:02.738250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.922 [2024-07-21 12:00:02.738397] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 62.135 ms, result 0 00:18:04.179 00:18:04.179 00:18:04.179 12:00:02 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:04.744 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:04.744 12:00:03 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:04.744 12:00:03 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:04.744 12:00:03 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:04.744 12:00:03 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:04.744 12:00:03 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:04.744 12:00:03 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:04.744 12:00:03 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 90130 00:18:04.744 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@946 -- # '[' -z 90130 ']' 00:18:04.744 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@950 -- # kill -0 90130 00:18:04.744 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (90130) - No such process 00:18:04.744 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@973 -- # echo 'Process with pid 90130 is not found' 00:18:04.744 Process with pid 90130 is not found 00:18:04.744 00:18:04.744 real 0m52.745s 00:18:04.744 user 1m17.366s 00:18:04.744 sys 0m5.427s 00:18:04.744 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:04.744 12:00:03 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:04.744 ************************************ 00:18:04.744 END TEST ftl_trim 00:18:04.744 ************************************ 00:18:04.744 12:00:03 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:04.744 12:00:03 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:18:04.744 12:00:03 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:04.744 12:00:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:04.744 ************************************ 00:18:04.744 START TEST ftl_restore 00:18:04.744 ************************************ 00:18:04.744 12:00:03 ftl.ftl_restore -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:05.002 * Looking for test storage... 
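The trace above is the FTL management pipeline "FTL shutdown" running to completion: each Action or Rollback record names one step and reports its duration and status, and the final record summarizes the whole pipeline (62.135 ms, result 0). The md5sum -c check that follows it ("/home/vagrant/spdk_repo/spdk/test/ftl/data: OK") is how ftl_trim decides pass/fail. A minimal sketch of that verification pattern, assuming the same paths as this run (the reference checksum file itself is produced by an earlier step not shown here):

  # record a checksum of the test data before the FTL device is torn down
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data > /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
  # on verification, md5sum -c prints "<file>: OK" per entry and exits 0 only if all entries match
  md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5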
00:18:05.002 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:05.002 12:00:03 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:05.002 12:00:03 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:05.002 12:00:03 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:05.002 12:00:03 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:05.002 12:00:03 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:05.002 12:00:03 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:05.002 12:00:03 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:05.002 12:00:03 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:05.002 12:00:03 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.eUpHcZhkqp 00:18:05.003 12:00:03 ftl.ftl_restore -- 
ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=90342 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 90342 00:18:05.003 12:00:03 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:05.003 12:00:03 ftl.ftl_restore -- common/autotest_common.sh@827 -- # '[' -z 90342 ']' 00:18:05.003 12:00:03 ftl.ftl_restore -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.003 12:00:03 ftl.ftl_restore -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:05.003 12:00:03 ftl.ftl_restore -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.003 12:00:03 ftl.ftl_restore -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:05.003 12:00:03 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:05.003 [2024-07-21 12:00:03.860923] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:05.003 [2024-07-21 12:00:03.861104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90342 ] 00:18:05.261 [2024-07-21 12:00:04.022360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.261 [2024-07-21 12:00:04.068541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.825 12:00:04 ftl.ftl_restore -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:05.825 12:00:04 ftl.ftl_restore -- common/autotest_common.sh@860 -- # return 0 00:18:05.825 12:00:04 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:05.825 12:00:04 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:05.825 12:00:04 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:05.825 12:00:04 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:05.825 12:00:04 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:05.825 12:00:04 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:06.083 12:00:04 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:06.083 12:00:04 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:06.083 12:00:04 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:06.083 12:00:04 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:18:06.083 12:00:04 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:18:06.083 12:00:04 ftl.ftl_restore -- 
common/autotest_common.sh@1376 -- # local bs 00:18:06.083 12:00:04 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:18:06.083 12:00:04 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:06.341 12:00:05 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:18:06.341 { 00:18:06.341 "name": "nvme0n1", 00:18:06.341 "aliases": [ 00:18:06.341 "e7d39109-2608-4b95-9db2-e31531da7155" 00:18:06.341 ], 00:18:06.341 "product_name": "NVMe disk", 00:18:06.341 "block_size": 4096, 00:18:06.341 "num_blocks": 1310720, 00:18:06.341 "uuid": "e7d39109-2608-4b95-9db2-e31531da7155", 00:18:06.341 "assigned_rate_limits": { 00:18:06.341 "rw_ios_per_sec": 0, 00:18:06.341 "rw_mbytes_per_sec": 0, 00:18:06.341 "r_mbytes_per_sec": 0, 00:18:06.341 "w_mbytes_per_sec": 0 00:18:06.341 }, 00:18:06.341 "claimed": true, 00:18:06.342 "claim_type": "read_many_write_one", 00:18:06.342 "zoned": false, 00:18:06.342 "supported_io_types": { 00:18:06.342 "read": true, 00:18:06.342 "write": true, 00:18:06.342 "unmap": true, 00:18:06.342 "write_zeroes": true, 00:18:06.342 "flush": true, 00:18:06.342 "reset": true, 00:18:06.342 "compare": true, 00:18:06.342 "compare_and_write": false, 00:18:06.342 "abort": true, 00:18:06.342 "nvme_admin": true, 00:18:06.342 "nvme_io": true 00:18:06.342 }, 00:18:06.342 "driver_specific": { 00:18:06.342 "nvme": [ 00:18:06.342 { 00:18:06.342 "pci_address": "0000:00:11.0", 00:18:06.342 "trid": { 00:18:06.342 "trtype": "PCIe", 00:18:06.342 "traddr": "0000:00:11.0" 00:18:06.342 }, 00:18:06.342 "ctrlr_data": { 00:18:06.342 "cntlid": 0, 00:18:06.342 "vendor_id": "0x1b36", 00:18:06.342 "model_number": "QEMU NVMe Ctrl", 00:18:06.342 "serial_number": "12341", 00:18:06.342 "firmware_revision": "8.0.0", 00:18:06.342 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:06.342 "oacs": { 00:18:06.342 "security": 0, 00:18:06.342 "format": 1, 00:18:06.342 "firmware": 0, 00:18:06.342 "ns_manage": 1 00:18:06.342 }, 00:18:06.342 "multi_ctrlr": false, 00:18:06.342 "ana_reporting": false 00:18:06.342 }, 00:18:06.342 "vs": { 00:18:06.342 "nvme_version": "1.4" 00:18:06.342 }, 00:18:06.342 "ns_data": { 00:18:06.342 "id": 1, 00:18:06.342 "can_share": false 00:18:06.342 } 00:18:06.342 } 00:18:06.342 ], 00:18:06.342 "mp_policy": "active_passive" 00:18:06.342 } 00:18:06.342 } 00:18:06.342 ]' 00:18:06.342 12:00:05 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:18:06.342 12:00:05 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:18:06.342 12:00:05 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:18:06.342 12:00:05 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=1310720 00:18:06.342 12:00:05 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:18:06.342 12:00:05 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 5120 00:18:06.342 12:00:05 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:06.342 12:00:05 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:06.342 12:00:05 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:06.342 12:00:05 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:06.342 12:00:05 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:06.604 12:00:05 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=aa013710-5471-4c69-9975-203ea300a289 00:18:06.604 12:00:05 ftl.ftl_restore -- 
ftl/common.sh@29 -- # for lvs in $stores 00:18:06.604 12:00:05 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u aa013710-5471-4c69-9975-203ea300a289 00:18:06.865 12:00:05 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:06.865 12:00:05 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=a5a067b8-f9dd-4672-b895-a4e16a8513d1 00:18:06.865 12:00:05 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a5a067b8-f9dd-4672-b895-a4e16a8513d1 00:18:07.124 12:00:05 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=844b238b-4189-457a-8d46-0175b9f810be 00:18:07.124 12:00:05 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:07.124 12:00:05 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 844b238b-4189-457a-8d46-0175b9f810be 00:18:07.124 12:00:05 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:07.124 12:00:05 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:07.124 12:00:05 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=844b238b-4189-457a-8d46-0175b9f810be 00:18:07.124 12:00:05 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:07.124 12:00:05 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 844b238b-4189-457a-8d46-0175b9f810be 00:18:07.124 12:00:05 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=844b238b-4189-457a-8d46-0175b9f810be 00:18:07.124 12:00:05 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:18:07.124 12:00:05 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:18:07.124 12:00:05 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:18:07.124 12:00:05 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 844b238b-4189-457a-8d46-0175b9f810be 00:18:07.383 12:00:06 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:18:07.383 { 00:18:07.383 "name": "844b238b-4189-457a-8d46-0175b9f810be", 00:18:07.383 "aliases": [ 00:18:07.383 "lvs/nvme0n1p0" 00:18:07.383 ], 00:18:07.383 "product_name": "Logical Volume", 00:18:07.383 "block_size": 4096, 00:18:07.383 "num_blocks": 26476544, 00:18:07.383 "uuid": "844b238b-4189-457a-8d46-0175b9f810be", 00:18:07.383 "assigned_rate_limits": { 00:18:07.383 "rw_ios_per_sec": 0, 00:18:07.383 "rw_mbytes_per_sec": 0, 00:18:07.383 "r_mbytes_per_sec": 0, 00:18:07.383 "w_mbytes_per_sec": 0 00:18:07.383 }, 00:18:07.383 "claimed": false, 00:18:07.383 "zoned": false, 00:18:07.383 "supported_io_types": { 00:18:07.383 "read": true, 00:18:07.383 "write": true, 00:18:07.383 "unmap": true, 00:18:07.383 "write_zeroes": true, 00:18:07.383 "flush": false, 00:18:07.383 "reset": true, 00:18:07.383 "compare": false, 00:18:07.383 "compare_and_write": false, 00:18:07.383 "abort": false, 00:18:07.383 "nvme_admin": false, 00:18:07.383 "nvme_io": false 00:18:07.383 }, 00:18:07.383 "driver_specific": { 00:18:07.383 "lvol": { 00:18:07.383 "lvol_store_uuid": "a5a067b8-f9dd-4672-b895-a4e16a8513d1", 00:18:07.383 "base_bdev": "nvme0n1", 00:18:07.383 "thin_provision": true, 00:18:07.383 "num_allocated_clusters": 0, 00:18:07.383 "snapshot": false, 00:18:07.383 "clone": false, 00:18:07.383 "esnap_clone": false 00:18:07.383 } 00:18:07.383 } 00:18:07.383 } 00:18:07.383 ]' 00:18:07.383 12:00:06 ftl.ftl_restore -- 
common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:18:07.383 12:00:06 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:18:07.383 12:00:06 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:18:07.383 12:00:06 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:18:07.383 12:00:06 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:18:07.383 12:00:06 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:18:07.383 12:00:06 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:07.383 12:00:06 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:07.383 12:00:06 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:07.641 12:00:06 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:07.641 12:00:06 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:07.641 12:00:06 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 844b238b-4189-457a-8d46-0175b9f810be 00:18:07.641 12:00:06 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=844b238b-4189-457a-8d46-0175b9f810be 00:18:07.641 12:00:06 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:18:07.641 12:00:06 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:18:07.641 12:00:06 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:18:07.641 12:00:06 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 844b238b-4189-457a-8d46-0175b9f810be 00:18:07.899 12:00:06 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:18:07.899 { 00:18:07.899 "name": "844b238b-4189-457a-8d46-0175b9f810be", 00:18:07.899 "aliases": [ 00:18:07.899 "lvs/nvme0n1p0" 00:18:07.899 ], 00:18:07.899 "product_name": "Logical Volume", 00:18:07.899 "block_size": 4096, 00:18:07.899 "num_blocks": 26476544, 00:18:07.899 "uuid": "844b238b-4189-457a-8d46-0175b9f810be", 00:18:07.899 "assigned_rate_limits": { 00:18:07.899 "rw_ios_per_sec": 0, 00:18:07.899 "rw_mbytes_per_sec": 0, 00:18:07.899 "r_mbytes_per_sec": 0, 00:18:07.899 "w_mbytes_per_sec": 0 00:18:07.899 }, 00:18:07.899 "claimed": false, 00:18:07.899 "zoned": false, 00:18:07.899 "supported_io_types": { 00:18:07.899 "read": true, 00:18:07.899 "write": true, 00:18:07.899 "unmap": true, 00:18:07.899 "write_zeroes": true, 00:18:07.899 "flush": false, 00:18:07.899 "reset": true, 00:18:07.899 "compare": false, 00:18:07.899 "compare_and_write": false, 00:18:07.899 "abort": false, 00:18:07.899 "nvme_admin": false, 00:18:07.899 "nvme_io": false 00:18:07.899 }, 00:18:07.899 "driver_specific": { 00:18:07.899 "lvol": { 00:18:07.899 "lvol_store_uuid": "a5a067b8-f9dd-4672-b895-a4e16a8513d1", 00:18:07.899 "base_bdev": "nvme0n1", 00:18:07.899 "thin_provision": true, 00:18:07.899 "num_allocated_clusters": 0, 00:18:07.899 "snapshot": false, 00:18:07.899 "clone": false, 00:18:07.899 "esnap_clone": false 00:18:07.899 } 00:18:07.899 } 00:18:07.899 } 00:18:07.899 ]' 00:18:07.899 12:00:06 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:18:07.899 12:00:06 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:18:07.899 12:00:06 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:18:07.899 12:00:06 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:18:07.899 12:00:06 ftl.ftl_restore 
-- common/autotest_common.sh@1383 -- # bdev_size=103424 00:18:07.899 12:00:06 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:18:07.899 12:00:06 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:07.899 12:00:06 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:08.158 12:00:06 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:08.158 12:00:06 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 844b238b-4189-457a-8d46-0175b9f810be 00:18:08.158 12:00:06 ftl.ftl_restore -- common/autotest_common.sh@1374 -- # local bdev_name=844b238b-4189-457a-8d46-0175b9f810be 00:18:08.158 12:00:06 ftl.ftl_restore -- common/autotest_common.sh@1375 -- # local bdev_info 00:18:08.158 12:00:06 ftl.ftl_restore -- common/autotest_common.sh@1376 -- # local bs 00:18:08.158 12:00:06 ftl.ftl_restore -- common/autotest_common.sh@1377 -- # local nb 00:18:08.158 12:00:06 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 844b238b-4189-457a-8d46-0175b9f810be 00:18:08.159 12:00:06 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:18:08.159 { 00:18:08.159 "name": "844b238b-4189-457a-8d46-0175b9f810be", 00:18:08.159 "aliases": [ 00:18:08.159 "lvs/nvme0n1p0" 00:18:08.159 ], 00:18:08.159 "product_name": "Logical Volume", 00:18:08.159 "block_size": 4096, 00:18:08.159 "num_blocks": 26476544, 00:18:08.159 "uuid": "844b238b-4189-457a-8d46-0175b9f810be", 00:18:08.159 "assigned_rate_limits": { 00:18:08.159 "rw_ios_per_sec": 0, 00:18:08.159 "rw_mbytes_per_sec": 0, 00:18:08.159 "r_mbytes_per_sec": 0, 00:18:08.159 "w_mbytes_per_sec": 0 00:18:08.159 }, 00:18:08.159 "claimed": false, 00:18:08.159 "zoned": false, 00:18:08.159 "supported_io_types": { 00:18:08.159 "read": true, 00:18:08.159 "write": true, 00:18:08.159 "unmap": true, 00:18:08.159 "write_zeroes": true, 00:18:08.159 "flush": false, 00:18:08.159 "reset": true, 00:18:08.159 "compare": false, 00:18:08.159 "compare_and_write": false, 00:18:08.159 "abort": false, 00:18:08.159 "nvme_admin": false, 00:18:08.159 "nvme_io": false 00:18:08.159 }, 00:18:08.159 "driver_specific": { 00:18:08.159 "lvol": { 00:18:08.159 "lvol_store_uuid": "a5a067b8-f9dd-4672-b895-a4e16a8513d1", 00:18:08.159 "base_bdev": "nvme0n1", 00:18:08.159 "thin_provision": true, 00:18:08.159 "num_allocated_clusters": 0, 00:18:08.159 "snapshot": false, 00:18:08.159 "clone": false, 00:18:08.159 "esnap_clone": false 00:18:08.159 } 00:18:08.159 } 00:18:08.159 } 00:18:08.159 ]' 00:18:08.159 12:00:06 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:18:08.419 12:00:07 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # bs=4096 00:18:08.419 12:00:07 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:18:08.419 12:00:07 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # nb=26476544 00:18:08.419 12:00:07 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:18:08.419 12:00:07 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # echo 103424 00:18:08.419 12:00:07 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:08.419 12:00:07 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 844b238b-4189-457a-8d46-0175b9f810be --l2p_dram_limit 10' 00:18:08.419 12:00:07 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:08.419 12:00:07 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 
0000:00:10.0 ']' 00:18:08.419 12:00:07 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:08.419 12:00:07 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:08.419 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:08.419 12:00:07 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 844b238b-4189-457a-8d46-0175b9f810be --l2p_dram_limit 10 -c nvc0n1p0 00:18:08.419 [2024-07-21 12:00:07.256489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.419 [2024-07-21 12:00:07.256537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:08.419 [2024-07-21 12:00:07.256551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:08.419 [2024-07-21 12:00:07.256558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.419 [2024-07-21 12:00:07.256623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.419 [2024-07-21 12:00:07.256632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:08.419 [2024-07-21 12:00:07.256641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:18:08.419 [2024-07-21 12:00:07.256649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.419 [2024-07-21 12:00:07.256674] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:08.419 [2024-07-21 12:00:07.256991] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:08.419 [2024-07-21 12:00:07.257023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.419 [2024-07-21 12:00:07.257031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:08.419 [2024-07-21 12:00:07.257043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.357 ms 00:18:08.419 [2024-07-21 12:00:07.257052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.419 [2024-07-21 12:00:07.257143] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 470802e4-3beb-44b2-b990-f45a7745233c 00:18:08.419 [2024-07-21 12:00:07.258509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.419 [2024-07-21 12:00:07.258538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:08.419 [2024-07-21 12:00:07.258558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:08.419 [2024-07-21 12:00:07.258573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.419 [2024-07-21 12:00:07.265964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.419 [2024-07-21 12:00:07.266000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:08.419 [2024-07-21 12:00:07.266024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.365 ms 00:18:08.419 [2024-07-21 12:00:07.266036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.419 [2024-07-21 12:00:07.266114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.420 [2024-07-21 12:00:07.266132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:08.420 [2024-07-21 12:00:07.266140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.051 ms 00:18:08.420 [2024-07-21 12:00:07.266149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.420 [2024-07-21 12:00:07.266215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.420 [2024-07-21 12:00:07.266232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:08.420 [2024-07-21 12:00:07.266240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:08.420 [2024-07-21 12:00:07.266249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.420 [2024-07-21 12:00:07.266272] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:08.420 [2024-07-21 12:00:07.267941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.420 [2024-07-21 12:00:07.267970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:08.420 [2024-07-21 12:00:07.267980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.678 ms 00:18:08.420 [2024-07-21 12:00:07.267988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.420 [2024-07-21 12:00:07.268020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.420 [2024-07-21 12:00:07.268027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:08.420 [2024-07-21 12:00:07.268036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:08.420 [2024-07-21 12:00:07.268043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.420 [2024-07-21 12:00:07.268077] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:08.420 [2024-07-21 12:00:07.268192] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:08.420 [2024-07-21 12:00:07.268206] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:08.420 [2024-07-21 12:00:07.268216] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:08.420 [2024-07-21 12:00:07.268227] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:08.420 [2024-07-21 12:00:07.268243] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:08.420 [2024-07-21 12:00:07.268252] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:08.420 [2024-07-21 12:00:07.268260] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:08.420 [2024-07-21 12:00:07.268270] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:08.420 [2024-07-21 12:00:07.268277] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:08.420 [2024-07-21 12:00:07.268286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.420 [2024-07-21 12:00:07.268299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:08.420 [2024-07-21 12:00:07.268319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:18:08.420 [2024-07-21 12:00:07.268326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.420 [2024-07-21 12:00:07.268398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
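For orientation while the ftl0 startup trace continues below: the bdev stack this FTL instance sits on was assembled by the RPC calls traced earlier in the restore setup. Condensed into one place as a sketch (not the literal restore.sh; rpc.py here abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the UUIDs and sizes are the ones from this run):

  rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0    # base device, 1310720 blocks x 4096 B
  rpc.py bdev_lvol_create_lvstore nvme0n1 lvs                            # new lvstore on the base namespace
  rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a5a067b8-f9dd-4672-b895-a4e16a8513d1   # 103424 MiB thin-provisioned lvol
  rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0     # NV-cache device
  rpc.py bdev_split_create nvc0n1 -s 5171 1                              # carve a 5171 MiB slice -> nvc0n1p0
  rpc.py -t 240 bdev_ftl_create -b ftl0 -d 844b238b-4189-457a-8d46-0175b9f810be \
      --l2p_dram_limit 10 -c nvc0n1p0                                    # FTL on the lvol, 10 MiB L2P DRAM limit, nvc0n1p0 as write buffer cache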
00:18:08.420 [2024-07-21 12:00:07.268406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:08.420 [2024-07-21 12:00:07.268417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:18:08.420 [2024-07-21 12:00:07.268424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.420 [2024-07-21 12:00:07.268508] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:08.420 [2024-07-21 12:00:07.268528] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:08.420 [2024-07-21 12:00:07.268539] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:08.420 [2024-07-21 12:00:07.268546] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.420 [2024-07-21 12:00:07.268555] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:08.420 [2024-07-21 12:00:07.268561] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:08.420 [2024-07-21 12:00:07.268571] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:08.420 [2024-07-21 12:00:07.268578] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:08.420 [2024-07-21 12:00:07.268586] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:08.420 [2024-07-21 12:00:07.268592] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:08.420 [2024-07-21 12:00:07.268600] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:08.420 [2024-07-21 12:00:07.268606] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:08.420 [2024-07-21 12:00:07.268614] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:08.420 [2024-07-21 12:00:07.268620] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:08.420 [2024-07-21 12:00:07.268629] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:08.420 [2024-07-21 12:00:07.268635] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.420 [2024-07-21 12:00:07.268642] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:08.420 [2024-07-21 12:00:07.268648] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:08.420 [2024-07-21 12:00:07.268656] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.420 [2024-07-21 12:00:07.268662] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:08.420 [2024-07-21 12:00:07.268669] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:08.420 [2024-07-21 12:00:07.268675] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:08.420 [2024-07-21 12:00:07.268682] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:08.420 [2024-07-21 12:00:07.268687] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:08.420 [2024-07-21 12:00:07.268696] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:08.420 [2024-07-21 12:00:07.268701] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:08.420 [2024-07-21 12:00:07.268708] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:08.420 [2024-07-21 12:00:07.268714] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:08.420 [2024-07-21 12:00:07.268721] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] Region p2l3 00:18:08.420 [2024-07-21 12:00:07.268726] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:08.420 [2024-07-21 12:00:07.268736] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:08.420 [2024-07-21 12:00:07.268741] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:08.420 [2024-07-21 12:00:07.268749] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:08.420 [2024-07-21 12:00:07.268754] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:08.420 [2024-07-21 12:00:07.268762] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:08.420 [2024-07-21 12:00:07.268767] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:08.420 [2024-07-21 12:00:07.268774] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:08.420 [2024-07-21 12:00:07.268779] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:08.420 [2024-07-21 12:00:07.268787] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:08.420 [2024-07-21 12:00:07.268793] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.420 [2024-07-21 12:00:07.268800] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:08.420 [2024-07-21 12:00:07.268806] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:08.420 [2024-07-21 12:00:07.268813] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.420 [2024-07-21 12:00:07.268834] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:08.420 [2024-07-21 12:00:07.268843] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:08.420 [2024-07-21 12:00:07.268850] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:08.420 [2024-07-21 12:00:07.268861] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:08.420 [2024-07-21 12:00:07.268870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:08.420 [2024-07-21 12:00:07.268879] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:08.420 [2024-07-21 12:00:07.268885] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:08.420 [2024-07-21 12:00:07.268893] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:08.420 [2024-07-21 12:00:07.268899] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:08.420 [2024-07-21 12:00:07.268907] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:08.420 [2024-07-21 12:00:07.268916] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:08.420 [2024-07-21 12:00:07.268932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:08.420 [2024-07-21 12:00:07.268941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:08.420 [2024-07-21 12:00:07.268952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:08.420 [2024-07-21 12:00:07.268958] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 
blk_offs:0x50a0 blk_sz:0x80 00:18:08.420 [2024-07-21 12:00:07.268967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:08.420 [2024-07-21 12:00:07.268974] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:08.420 [2024-07-21 12:00:07.268981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:08.420 [2024-07-21 12:00:07.268987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:08.420 [2024-07-21 12:00:07.269001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:08.420 [2024-07-21 12:00:07.269007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:08.420 [2024-07-21 12:00:07.269016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:08.420 [2024-07-21 12:00:07.269022] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:08.420 [2024-07-21 12:00:07.269030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:08.420 [2024-07-21 12:00:07.269037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:08.420 [2024-07-21 12:00:07.269045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:08.420 [2024-07-21 12:00:07.269051] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:08.420 [2024-07-21 12:00:07.269061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:08.420 [2024-07-21 12:00:07.269068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:08.420 [2024-07-21 12:00:07.269076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:08.420 [2024-07-21 12:00:07.269083] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:08.420 [2024-07-21 12:00:07.269091] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:08.420 [2024-07-21 12:00:07.269099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.420 [2024-07-21 12:00:07.269108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:08.420 [2024-07-21 12:00:07.269121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.641 ms 00:18:08.420 [2024-07-21 12:00:07.269132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.420 [2024-07-21 12:00:07.269167] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV 
cache data region needs scrubbing, this may take a while. 00:18:08.420 [2024-07-21 12:00:07.269177] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:13.691 [2024-07-21 12:00:11.866372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.691 [2024-07-21 12:00:11.866439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:13.691 [2024-07-21 12:00:11.866453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4606.065 ms 00:18:13.691 [2024-07-21 12:00:11.866462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:11.877508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:11.877577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:13.692 [2024-07-21 12:00:11.877591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.980 ms 00:18:13.692 [2024-07-21 12:00:11.877601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:11.877690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:11.877704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:13.692 [2024-07-21 12:00:11.877712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:18:13.692 [2024-07-21 12:00:11.877723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:11.887334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:11.887378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:13.692 [2024-07-21 12:00:11.887388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.573 ms 00:18:13.692 [2024-07-21 12:00:11.887398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:11.887441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:11.887453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:13.692 [2024-07-21 12:00:11.887461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:13.692 [2024-07-21 12:00:11.887471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:11.887923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:11.887945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:13.692 [2024-07-21 12:00:11.887954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:18:13.692 [2024-07-21 12:00:11.887962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:11.888046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:11.888063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:13.692 [2024-07-21 12:00:11.888071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:13.692 [2024-07-21 12:00:11.888081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:11.894894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:11.894942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Initialize reloc 00:18:13.692 [2024-07-21 12:00:11.894953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.794 ms 00:18:13.692 [2024-07-21 12:00:11.894962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:11.902106] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:13.692 [2024-07-21 12:00:11.905220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:11.905259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:13.692 [2024-07-21 12:00:11.905273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.212 ms 00:18:13.692 [2024-07-21 12:00:11.905281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:12.024607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:12.024667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:13.692 [2024-07-21 12:00:12.024685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 119.519 ms 00:18:13.692 [2024-07-21 12:00:12.024693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:12.024869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:12.024881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:13.692 [2024-07-21 12:00:12.024890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:18:13.692 [2024-07-21 12:00:12.024897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:12.029102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:12.029143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:13.692 [2024-07-21 12:00:12.029157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.189 ms 00:18:13.692 [2024-07-21 12:00:12.029167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:12.032100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:12.032134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:13.692 [2024-07-21 12:00:12.032146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.912 ms 00:18:13.692 [2024-07-21 12:00:12.032154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:12.032420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:12.032441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:13.692 [2024-07-21 12:00:12.032451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:18:13.692 [2024-07-21 12:00:12.032459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:12.096094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:12.096151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:13.692 [2024-07-21 12:00:12.096168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.705 ms 00:18:13.692 [2024-07-21 12:00:12.096178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:13.692 [2024-07-21 12:00:12.101073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:12.101109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:13.692 [2024-07-21 12:00:12.101129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.875 ms 00:18:13.692 [2024-07-21 12:00:12.101138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:12.104257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:12.104289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:13.692 [2024-07-21 12:00:12.104300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.099 ms 00:18:13.692 [2024-07-21 12:00:12.104307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:12.107625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:12.107659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:13.692 [2024-07-21 12:00:12.107671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.299 ms 00:18:13.692 [2024-07-21 12:00:12.107679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:12.107708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:12.107716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:13.692 [2024-07-21 12:00:12.107727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:13.692 [2024-07-21 12:00:12.107741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:12.107809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:12.107829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:13.692 [2024-07-21 12:00:12.107840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:13.692 [2024-07-21 12:00:12.107848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:12.108805] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4861.274 ms, result 0 00:18:13.692 { 00:18:13.692 "name": "ftl0", 00:18:13.692 "uuid": "470802e4-3beb-44b2-b990-f45a7745233c" 00:18:13.692 } 00:18:13.692 12:00:12 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:13.692 12:00:12 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:13.692 12:00:12 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:13.692 12:00:12 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:13.692 [2024-07-21 12:00:12.498482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:12.498547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:13.692 [2024-07-21 12:00:12.498560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:13.692 [2024-07-21 12:00:12.498572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:12.498597] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:18:13.692 [2024-07-21 12:00:12.499290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:12.499313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:13.692 [2024-07-21 12:00:12.499326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:18:13.692 [2024-07-21 12:00:12.499333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:12.499558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:12.499574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:13.692 [2024-07-21 12:00:12.499585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:18:13.692 [2024-07-21 12:00:12.499593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:12.501953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:12.501978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:13.692 [2024-07-21 12:00:12.501988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.347 ms 00:18:13.692 [2024-07-21 12:00:12.501995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:12.506675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:12.506725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:13.692 [2024-07-21 12:00:12.506739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.669 ms 00:18:13.692 [2024-07-21 12:00:12.506746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:12.508226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:12.508263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:13.692 [2024-07-21 12:00:12.508278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.366 ms 00:18:13.692 [2024-07-21 12:00:12.508285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:12.514871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:12.514908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:13.692 [2024-07-21 12:00:12.514919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.563 ms 00:18:13.692 [2024-07-21 12:00:12.514927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:12.515046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:12.515059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:13.692 [2024-07-21 12:00:12.515070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:18:13.692 [2024-07-21 12:00:12.515080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.692 [2024-07-21 12:00:12.517116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.692 [2024-07-21 12:00:12.517150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:13.693 [2024-07-21 12:00:12.517161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.009 ms 00:18:13.693 
[2024-07-21 12:00:12.517168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.693 [2024-07-21 12:00:12.518566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.693 [2024-07-21 12:00:12.518598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:13.693 [2024-07-21 12:00:12.518611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.366 ms 00:18:13.693 [2024-07-21 12:00:12.518618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.693 [2024-07-21 12:00:12.519671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.693 [2024-07-21 12:00:12.519704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:13.693 [2024-07-21 12:00:12.519715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.025 ms 00:18:13.693 [2024-07-21 12:00:12.519722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.693 [2024-07-21 12:00:12.520746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.693 [2024-07-21 12:00:12.520777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:13.693 [2024-07-21 12:00:12.520788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:18:13.693 [2024-07-21 12:00:12.520795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.693 [2024-07-21 12:00:12.520834] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:13.693 [2024-07-21 12:00:12.520851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.520862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.520871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.520882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.520890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.520901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.520909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.520918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.520927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.520936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.520944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.520955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.520962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.520971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 
00:18:13.693 [2024-07-21 12:00:12.520980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.520989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.520997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 
wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:13.693 [2024-07-21 12:00:12.521572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:13.694 [2024-07-21 12:00:12.521579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:13.694 [2024-07-21 12:00:12.521591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:13.694 [2024-07-21 12:00:12.521599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:13.694 [2024-07-21 12:00:12.521608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:13.694 [2024-07-21 12:00:12.521615] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:13.694 [2024-07-21 12:00:12.521624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:13.694 [2024-07-21 12:00:12.521631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:13.694 [2024-07-21 12:00:12.521640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:13.694 [2024-07-21 12:00:12.521648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:13.694 [2024-07-21 12:00:12.521658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:13.694 [2024-07-21 12:00:12.521666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:13.694 [2024-07-21 12:00:12.521674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:13.694 [2024-07-21 12:00:12.521682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:13.694 [2024-07-21 12:00:12.521690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:13.694 [2024-07-21 12:00:12.521696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:13.694 [2024-07-21 12:00:12.521705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:13.694 [2024-07-21 12:00:12.521731] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:13.694 [2024-07-21 12:00:12.521741] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 470802e4-3beb-44b2-b990-f45a7745233c 00:18:13.694 [2024-07-21 12:00:12.521749] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:13.694 [2024-07-21 12:00:12.521757] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:13.694 [2024-07-21 12:00:12.521763] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:13.694 [2024-07-21 12:00:12.521779] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:13.694 [2024-07-21 12:00:12.521785] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:13.694 [2024-07-21 12:00:12.521795] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:13.694 [2024-07-21 12:00:12.521804] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:13.694 [2024-07-21 12:00:12.521813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:13.694 [2024-07-21 12:00:12.521818] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:13.694 [2024-07-21 12:00:12.521827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.694 [2024-07-21 12:00:12.521833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:13.694 [2024-07-21 12:00:12.521854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.997 ms 00:18:13.694 [2024-07-21 12:00:12.521862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.694 [2024-07-21 12:00:12.523566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.694 [2024-07-21 12:00:12.523588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Deinitialize L2P 00:18:13.694 [2024-07-21 12:00:12.523601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.686 ms 00:18:13.694 [2024-07-21 12:00:12.523615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.694 [2024-07-21 12:00:12.523720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.694 [2024-07-21 12:00:12.523751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:13.694 [2024-07-21 12:00:12.523760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:18:13.694 [2024-07-21 12:00:12.523767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.694 [2024-07-21 12:00:12.530047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.694 [2024-07-21 12:00:12.530071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:13.694 [2024-07-21 12:00:12.530084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.694 [2024-07-21 12:00:12.530095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.694 [2024-07-21 12:00:12.530145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.694 [2024-07-21 12:00:12.530153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:13.694 [2024-07-21 12:00:12.530173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.694 [2024-07-21 12:00:12.530180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.694 [2024-07-21 12:00:12.530243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.694 [2024-07-21 12:00:12.530253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:13.694 [2024-07-21 12:00:12.530264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.694 [2024-07-21 12:00:12.530278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.694 [2024-07-21 12:00:12.530298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.694 [2024-07-21 12:00:12.530306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:13.694 [2024-07-21 12:00:12.530315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.694 [2024-07-21 12:00:12.530322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.694 [2024-07-21 12:00:12.543161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.694 [2024-07-21 12:00:12.543203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:13.694 [2024-07-21 12:00:12.543215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.694 [2024-07-21 12:00:12.543222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.694 [2024-07-21 12:00:12.551198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.694 [2024-07-21 12:00:12.551231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:13.694 [2024-07-21 12:00:12.551243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.694 [2024-07-21 12:00:12.551251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.694 [2024-07-21 12:00:12.551319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.694 [2024-07-21 
12:00:12.551329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:13.694 [2024-07-21 12:00:12.551342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.694 [2024-07-21 12:00:12.551350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.694 [2024-07-21 12:00:12.551385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.694 [2024-07-21 12:00:12.551397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:13.694 [2024-07-21 12:00:12.551406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.694 [2024-07-21 12:00:12.551414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.694 [2024-07-21 12:00:12.551488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.694 [2024-07-21 12:00:12.551506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:13.694 [2024-07-21 12:00:12.551515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.694 [2024-07-21 12:00:12.551523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.694 [2024-07-21 12:00:12.551560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.694 [2024-07-21 12:00:12.551571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:13.694 [2024-07-21 12:00:12.551582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.694 [2024-07-21 12:00:12.551589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.694 [2024-07-21 12:00:12.551631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.694 [2024-07-21 12:00:12.551640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:13.694 [2024-07-21 12:00:12.551652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.694 [2024-07-21 12:00:12.551659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.694 [2024-07-21 12:00:12.551703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.694 [2024-07-21 12:00:12.551713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:13.694 [2024-07-21 12:00:12.551731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.694 [2024-07-21 12:00:12.551745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.694 [2024-07-21 12:00:12.551899] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.480 ms, result 0 00:18:13.953 true 00:18:13.953 12:00:12 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 90342 00:18:13.953 12:00:12 ftl.ftl_restore -- common/autotest_common.sh@946 -- # '[' -z 90342 ']' 00:18:13.953 12:00:12 ftl.ftl_restore -- common/autotest_common.sh@950 -- # kill -0 90342 00:18:13.953 12:00:12 ftl.ftl_restore -- common/autotest_common.sh@951 -- # uname 00:18:13.953 12:00:12 ftl.ftl_restore -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:13.953 12:00:12 ftl.ftl_restore -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 90342 00:18:13.953 killing process with pid 90342 00:18:13.953 12:00:12 ftl.ftl_restore -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:13.953 12:00:12 ftl.ftl_restore -- common/autotest_common.sh@956 -- # '[' 
reactor_0 = sudo ']' 00:18:13.953 12:00:12 ftl.ftl_restore -- common/autotest_common.sh@964 -- # echo 'killing process with pid 90342' 00:18:13.953 12:00:12 ftl.ftl_restore -- common/autotest_common.sh@965 -- # kill 90342 00:18:13.953 12:00:12 ftl.ftl_restore -- common/autotest_common.sh@970 -- # wait 90342 00:18:19.248 12:00:17 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:22.531 262144+0 records in 00:18:22.531 262144+0 records out 00:18:22.531 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.20391 s, 335 MB/s 00:18:22.531 12:00:20 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:23.907 12:00:22 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:23.907 [2024-07-21 12:00:22.667342] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:18:23.907 [2024-07-21 12:00:22.667494] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90604 ] 00:18:24.164 [2024-07-21 12:00:22.838391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.164 [2024-07-21 12:00:22.885293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.164 [2024-07-21 12:00:22.987105] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:24.164 [2024-07-21 12:00:22.987184] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:24.424 [2024-07-21 12:00:23.133233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.425 [2024-07-21 12:00:23.133278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:24.425 [2024-07-21 12:00:23.133293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:24.425 [2024-07-21 12:00:23.133307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.425 [2024-07-21 12:00:23.133360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.425 [2024-07-21 12:00:23.133372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:24.425 [2024-07-21 12:00:23.133391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:24.425 [2024-07-21 12:00:23.133402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.425 [2024-07-21 12:00:23.133427] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:24.425 [2024-07-21 12:00:23.133635] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:24.425 [2024-07-21 12:00:23.133659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.425 [2024-07-21 12:00:23.133670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:24.425 [2024-07-21 12:00:23.133678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:18:24.425 [2024-07-21 12:00:23.133693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.425 [2024-07-21 12:00:23.135102] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:24.425 [2024-07-21 12:00:23.137543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.425 [2024-07-21 12:00:23.137581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:24.425 [2024-07-21 12:00:23.137594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.463 ms 00:18:24.425 [2024-07-21 12:00:23.137602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.425 [2024-07-21 12:00:23.137659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.425 [2024-07-21 12:00:23.137669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:24.425 [2024-07-21 12:00:23.137678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:18:24.425 [2024-07-21 12:00:23.137685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.425 [2024-07-21 12:00:23.144493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.425 [2024-07-21 12:00:23.144522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:24.425 [2024-07-21 12:00:23.144532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.777 ms 00:18:24.425 [2024-07-21 12:00:23.144539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.425 [2024-07-21 12:00:23.144620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.425 [2024-07-21 12:00:23.144631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:24.425 [2024-07-21 12:00:23.144639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:18:24.425 [2024-07-21 12:00:23.144646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.425 [2024-07-21 12:00:23.144695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.425 [2024-07-21 12:00:23.144707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:24.425 [2024-07-21 12:00:23.144719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:24.425 [2024-07-21 12:00:23.144726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.425 [2024-07-21 12:00:23.144750] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:24.425 [2024-07-21 12:00:23.146367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.425 [2024-07-21 12:00:23.146400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:24.425 [2024-07-21 12:00:23.146421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.628 ms 00:18:24.425 [2024-07-21 12:00:23.146435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.425 [2024-07-21 12:00:23.146465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.425 [2024-07-21 12:00:23.146474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:24.425 [2024-07-21 12:00:23.146484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:24.425 [2024-07-21 12:00:23.146491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.425 [2024-07-21 12:00:23.146510] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:24.425 [2024-07-21 12:00:23.146529] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:24.425 [2024-07-21 12:00:23.146570] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:24.425 [2024-07-21 12:00:23.146586] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:18:24.425 [2024-07-21 12:00:23.146664] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:24.425 [2024-07-21 12:00:23.146677] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:24.425 [2024-07-21 12:00:23.146689] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:18:24.425 [2024-07-21 12:00:23.146706] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:24.425 [2024-07-21 12:00:23.146714] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:24.425 [2024-07-21 12:00:23.146729] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:24.425 [2024-07-21 12:00:23.146744] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:24.425 [2024-07-21 12:00:23.146751] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:24.425 [2024-07-21 12:00:23.146759] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:24.425 [2024-07-21 12:00:23.146766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.425 [2024-07-21 12:00:23.146774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:24.425 [2024-07-21 12:00:23.146782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:18:24.425 [2024-07-21 12:00:23.146792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.425 [2024-07-21 12:00:23.146884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.425 [2024-07-21 12:00:23.146894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:24.425 [2024-07-21 12:00:23.146901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:18:24.425 [2024-07-21 12:00:23.146910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.425 [2024-07-21 12:00:23.146996] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:24.425 [2024-07-21 12:00:23.147007] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:24.425 [2024-07-21 12:00:23.147015] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:24.425 [2024-07-21 12:00:23.147022] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.425 [2024-07-21 12:00:23.147033] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:24.425 [2024-07-21 12:00:23.147039] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:24.425 [2024-07-21 12:00:23.147046] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:24.425 [2024-07-21 12:00:23.147053] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:24.425 [2024-07-21 12:00:23.147059] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:24.425 [2024-07-21 
12:00:23.147066] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:24.425 [2024-07-21 12:00:23.147072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:24.425 [2024-07-21 12:00:23.147079] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:24.425 [2024-07-21 12:00:23.147095] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:24.425 [2024-07-21 12:00:23.147101] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:24.425 [2024-07-21 12:00:23.147108] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:24.425 [2024-07-21 12:00:23.147115] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.425 [2024-07-21 12:00:23.147124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:24.425 [2024-07-21 12:00:23.147138] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:24.425 [2024-07-21 12:00:23.147144] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.425 [2024-07-21 12:00:23.147150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:24.425 [2024-07-21 12:00:23.147157] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:24.425 [2024-07-21 12:00:23.147163] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:24.425 [2024-07-21 12:00:23.147169] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:24.425 [2024-07-21 12:00:23.147175] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:24.425 [2024-07-21 12:00:23.147182] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:24.425 [2024-07-21 12:00:23.147189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:24.425 [2024-07-21 12:00:23.147196] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:24.425 [2024-07-21 12:00:23.147202] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:24.425 [2024-07-21 12:00:23.147209] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:24.425 [2024-07-21 12:00:23.147215] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:24.425 [2024-07-21 12:00:23.147221] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:24.425 [2024-07-21 12:00:23.147228] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:24.425 [2024-07-21 12:00:23.147238] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:24.425 [2024-07-21 12:00:23.147245] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:24.425 [2024-07-21 12:00:23.147251] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:24.425 [2024-07-21 12:00:23.147256] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:24.425 [2024-07-21 12:00:23.147264] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:24.425 [2024-07-21 12:00:23.147270] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:24.425 [2024-07-21 12:00:23.147276] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:24.425 [2024-07-21 12:00:23.147281] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.425 [2024-07-21 12:00:23.147287] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:18:24.425 [2024-07-21 12:00:23.147294] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:24.425 [2024-07-21 12:00:23.147300] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.425 [2024-07-21 12:00:23.147305] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:24.425 [2024-07-21 12:00:23.147313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:24.425 [2024-07-21 12:00:23.147319] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:24.425 [2024-07-21 12:00:23.147327] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:24.426 [2024-07-21 12:00:23.147334] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:24.426 [2024-07-21 12:00:23.147342] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:24.426 [2024-07-21 12:00:23.147348] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:24.426 [2024-07-21 12:00:23.147354] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:24.426 [2024-07-21 12:00:23.147360] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:24.426 [2024-07-21 12:00:23.147366] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:24.426 [2024-07-21 12:00:23.147374] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:24.426 [2024-07-21 12:00:23.147381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:24.426 [2024-07-21 12:00:23.147389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:24.426 [2024-07-21 12:00:23.147396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:24.426 [2024-07-21 12:00:23.147403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:24.426 [2024-07-21 12:00:23.147410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:24.426 [2024-07-21 12:00:23.147417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:24.426 [2024-07-21 12:00:23.147424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:24.426 [2024-07-21 12:00:23.147430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:24.426 [2024-07-21 12:00:23.147437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:24.426 [2024-07-21 12:00:23.147444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:24.426 [2024-07-21 12:00:23.147453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:24.426 [2024-07-21 12:00:23.147461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:24.426 [2024-07-21 12:00:23.147469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:24.426 [2024-07-21 12:00:23.147476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:24.426 [2024-07-21 12:00:23.147484] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:24.426 [2024-07-21 12:00:23.147491] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:24.426 [2024-07-21 12:00:23.147498] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:24.426 [2024-07-21 12:00:23.147505] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:24.426 [2024-07-21 12:00:23.147511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:24.426 [2024-07-21 12:00:23.147518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:24.426 [2024-07-21 12:00:23.147525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:24.426 [2024-07-21 12:00:23.147532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.426 [2024-07-21 12:00:23.147539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:24.426 [2024-07-21 12:00:23.147550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:18:24.426 [2024-07-21 12:00:23.147557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.426 [2024-07-21 12:00:23.173332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.426 [2024-07-21 12:00:23.173372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:24.426 [2024-07-21 12:00:23.173385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.775 ms 00:18:24.426 [2024-07-21 12:00:23.173411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.426 [2024-07-21 12:00:23.173507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.426 [2024-07-21 12:00:23.173519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:24.426 [2024-07-21 12:00:23.173529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:24.426 [2024-07-21 12:00:23.173541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.426 [2024-07-21 12:00:23.184755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.426 [2024-07-21 12:00:23.184798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:24.426 [2024-07-21 12:00:23.184814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.160 ms 00:18:24.426 [2024-07-21 12:00:23.184855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.426 [2024-07-21 12:00:23.184923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.426 [2024-07-21 
12:00:23.184938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:24.426 [2024-07-21 12:00:23.184952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:24.426 [2024-07-21 12:00:23.184968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.426 [2024-07-21 12:00:23.185482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.426 [2024-07-21 12:00:23.185507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:24.426 [2024-07-21 12:00:23.185521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.451 ms 00:18:24.426 [2024-07-21 12:00:23.185533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.426 [2024-07-21 12:00:23.185694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.426 [2024-07-21 12:00:23.185724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:24.426 [2024-07-21 12:00:23.185738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:18:24.426 [2024-07-21 12:00:23.185749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.426 [2024-07-21 12:00:23.191827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.426 [2024-07-21 12:00:23.191866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:24.426 [2024-07-21 12:00:23.191886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.034 ms 00:18:24.426 [2024-07-21 12:00:23.191895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.426 [2024-07-21 12:00:23.194498] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:24.426 [2024-07-21 12:00:23.194532] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:24.426 [2024-07-21 12:00:23.194545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.426 [2024-07-21 12:00:23.194555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:24.426 [2024-07-21 12:00:23.194565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.563 ms 00:18:24.426 [2024-07-21 12:00:23.194573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.426 [2024-07-21 12:00:23.206917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.426 [2024-07-21 12:00:23.206955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:24.426 [2024-07-21 12:00:23.206965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.327 ms 00:18:24.426 [2024-07-21 12:00:23.206972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.426 [2024-07-21 12:00:23.208794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.426 [2024-07-21 12:00:23.208836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:24.426 [2024-07-21 12:00:23.208845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.775 ms 00:18:24.426 [2024-07-21 12:00:23.208853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.426 [2024-07-21 12:00:23.210418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.426 [2024-07-21 12:00:23.210446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Restore trim metadata 00:18:24.426 [2024-07-21 12:00:23.210455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.536 ms 00:18:24.426 [2024-07-21 12:00:23.210463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.426 [2024-07-21 12:00:23.210727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.426 [2024-07-21 12:00:23.210747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:24.426 [2024-07-21 12:00:23.210756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:18:24.426 [2024-07-21 12:00:23.210764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.426 [2024-07-21 12:00:23.230608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.426 [2024-07-21 12:00:23.230662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:24.426 [2024-07-21 12:00:23.230675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.858 ms 00:18:24.426 [2024-07-21 12:00:23.230683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.426 [2024-07-21 12:00:23.236754] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:24.426 [2024-07-21 12:00:23.239777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.426 [2024-07-21 12:00:23.239808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:24.426 [2024-07-21 12:00:23.239824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.049 ms 00:18:24.426 [2024-07-21 12:00:23.239832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.426 [2024-07-21 12:00:23.239889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.426 [2024-07-21 12:00:23.239899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:24.426 [2024-07-21 12:00:23.239907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:24.426 [2024-07-21 12:00:23.239914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.426 [2024-07-21 12:00:23.239986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.426 [2024-07-21 12:00:23.240004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:24.426 [2024-07-21 12:00:23.240024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:24.426 [2024-07-21 12:00:23.240032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.426 [2024-07-21 12:00:23.240051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.426 [2024-07-21 12:00:23.240059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:24.426 [2024-07-21 12:00:23.240066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:24.426 [2024-07-21 12:00:23.240073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.426 [2024-07-21 12:00:23.240111] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:24.426 [2024-07-21 12:00:23.240120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.426 [2024-07-21 12:00:23.240127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:24.426 [2024-07-21 12:00:23.240135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.010 ms 00:18:24.426 [2024-07-21 12:00:23.240145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.426 [2024-07-21 12:00:23.243834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.426 [2024-07-21 12:00:23.243864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:24.427 [2024-07-21 12:00:23.243874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.672 ms 00:18:24.427 [2024-07-21 12:00:23.243882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.427 [2024-07-21 12:00:23.243941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:24.427 [2024-07-21 12:00:23.243951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:24.427 [2024-07-21 12:00:23.243958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:18:24.427 [2024-07-21 12:00:23.243965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:24.427 [2024-07-21 12:00:23.244974] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 111.528 ms, result 0 00:19:01.365  Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-21 12:01:00.033531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.365 [2024-07-21 12:01:00.033588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:01.365 [2024-07-21 12:01:00.033602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:01.365 [2024-07-21 12:01:00.033615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.365 [2024-07-21 12:01:00.033634] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:01.365 [2024-07-21 12:01:00.034336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.365 [2024-07-21 12:01:00.034357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:01.365 [2024-07-21 12:01:00.034365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:19:01.365 [2024-07-21 12:01:00.034372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.365 [2024-07-21 12:01:00.036256] mngt/ftl_mngt.c:
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.365 [2024-07-21 12:01:00.036308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:01.365 [2024-07-21 12:01:00.036319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.870 ms 00:19:01.365 [2024-07-21 12:01:00.036332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.365 [2024-07-21 12:01:00.053179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.365 [2024-07-21 12:01:00.053216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:01.365 [2024-07-21 12:01:00.053238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.862 ms 00:19:01.365 [2024-07-21 12:01:00.053245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.365 [2024-07-21 12:01:00.058065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.365 [2024-07-21 12:01:00.058095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:01.365 [2024-07-21 12:01:00.058104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.799 ms 00:19:01.365 [2024-07-21 12:01:00.058110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.365 [2024-07-21 12:01:00.059439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.365 [2024-07-21 12:01:00.059474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:01.365 [2024-07-21 12:01:00.059489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.282 ms 00:19:01.365 [2024-07-21 12:01:00.059496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.365 [2024-07-21 12:01:00.063252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.365 [2024-07-21 12:01:00.063289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:01.365 [2024-07-21 12:01:00.063299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.740 ms 00:19:01.366 [2024-07-21 12:01:00.063306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.366 [2024-07-21 12:01:00.063413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.366 [2024-07-21 12:01:00.063428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:01.366 [2024-07-21 12:01:00.063437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:19:01.366 [2024-07-21 12:01:00.063487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.366 [2024-07-21 12:01:00.065616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.366 [2024-07-21 12:01:00.065648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:01.366 [2024-07-21 12:01:00.065657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.110 ms 00:19:01.366 [2024-07-21 12:01:00.065664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.366 [2024-07-21 12:01:00.067028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.366 [2024-07-21 12:01:00.067059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:01.366 [2024-07-21 12:01:00.067068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.344 ms 00:19:01.366 [2024-07-21 12:01:00.067075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
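
The four-record Actions above are the per-step trace that mngt/ftl_mngt.c emits for every FTL management step: 427:trace_step prints "Action" (or "Rollback"), 428 the step name, 430 the duration, 431 the status. A minimal sketch for pulling the step durations back out of a captured log, so they can be compared against the "Management process finished, ... duration = N ms" summary lines (illustrative only, not part of the SPDK test suite; the per-step sum comes out a little below the reported total, which also covers time spent between steps):

    import re

    # Matches the 430:trace_step record, e.g.
    # "mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms"
    DURATION = re.compile(
        r"mngt/ftl_mngt\.c: 430:trace_step: \*NOTICE\*: "
        r"\[FTL\]\[(\w+)\] duration: ([0-9.]+) ms"
    )

    def step_duration_total(log_text: str, dev: str = "ftl0") -> float:
        # Sum every per-step "duration: N ms" record reported for one device.
        return sum(float(ms) for d, ms in DURATION.findall(log_text) if d == dev)
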
00:19:01.366 [2024-07-21 12:01:00.068196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.366 [2024-07-21 12:01:00.068227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:01.366 [2024-07-21 12:01:00.068235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.101 ms 00:19:01.366 [2024-07-21 12:01:00.068242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.366 [2024-07-21 12:01:00.069233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.366 [2024-07-21 12:01:00.069262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:01.366 [2024-07-21 12:01:00.069271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms 00:19:01.366 [2024-07-21 12:01:00.069278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.366 [2024-07-21 12:01:00.069299] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:01.366 [2024-07-21 12:01:00.069314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 
261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069803] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:01.366 [2024-07-21 12:01:00.069872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:01.367 [2024-07-21 12:01:00.069879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:01.367 [2024-07-21 12:01:00.069887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:01.367 [2024-07-21 12:01:00.069894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:01.367 [2024-07-21 12:01:00.069901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:01.367 [2024-07-21 12:01:00.069908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:01.367 [2024-07-21 12:01:00.069915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:01.367 [2024-07-21 12:01:00.069923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:01.367 [2024-07-21 12:01:00.069930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:01.367 [2024-07-21 12:01:00.069936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:01.367 [2024-07-21 12:01:00.069943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:01.367 [2024-07-21 12:01:00.069950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:01.367 [2024-07-21 12:01:00.069957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:01.367 [2024-07-21 12:01:00.069964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:01.367 [2024-07-21 12:01:00.069971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:01.367 [2024-07-21 12:01:00.069978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:01.367 [2024-07-21 12:01:00.069986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:01.367 [2024-07-21 
12:01:00.069993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:01.367 [2024-07-21 12:01:00.070001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:01.367 [2024-07-21 12:01:00.070010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:01.367 [2024-07-21 12:01:00.070016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:01.367 [2024-07-21 12:01:00.070023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:01.367 [2024-07-21 12:01:00.070030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:01.367 [2024-07-21 12:01:00.070037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:01.367 [2024-07-21 12:01:00.070044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:01.367 [2024-07-21 12:01:00.070058] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:01.367 [2024-07-21 12:01:00.070071] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 470802e4-3beb-44b2-b990-f45a7745233c 00:19:01.367 [2024-07-21 12:01:00.070080] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:01.367 [2024-07-21 12:01:00.070098] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:01.367 [2024-07-21 12:01:00.070105] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:01.367 [2024-07-21 12:01:00.070112] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:01.367 [2024-07-21 12:01:00.070119] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:01.367 [2024-07-21 12:01:00.070126] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:01.367 [2024-07-21 12:01:00.070133] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:01.367 [2024-07-21 12:01:00.070139] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:01.367 [2024-07-21 12:01:00.070146] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:01.367 [2024-07-21 12:01:00.070153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.367 [2024-07-21 12:01:00.070164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:01.367 [2024-07-21 12:01:00.070171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.857 ms 00:19:01.367 [2024-07-21 12:01:00.070178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.367 [2024-07-21 12:01:00.072058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.367 [2024-07-21 12:01:00.072080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:01.367 [2024-07-21 12:01:00.072088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.856 ms 00:19:01.367 [2024-07-21 12:01:00.072104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.367 [2024-07-21 12:01:00.072205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:01.367 [2024-07-21 12:01:00.072219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:01.367 [2024-07-21 
12:01:00.072227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:19:01.367 [2024-07-21 12:01:00.072234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.367 [2024-07-21 12:01:00.077534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.367 [2024-07-21 12:01:00.077556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:01.367 [2024-07-21 12:01:00.077564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.367 [2024-07-21 12:01:00.077584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.367 [2024-07-21 12:01:00.077627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.367 [2024-07-21 12:01:00.077635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:01.367 [2024-07-21 12:01:00.077642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.367 [2024-07-21 12:01:00.077649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.367 [2024-07-21 12:01:00.077700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.367 [2024-07-21 12:01:00.077711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:01.367 [2024-07-21 12:01:00.077719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.367 [2024-07-21 12:01:00.077727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.367 [2024-07-21 12:01:00.077740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.367 [2024-07-21 12:01:00.077752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:01.367 [2024-07-21 12:01:00.077759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.367 [2024-07-21 12:01:00.077766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.367 [2024-07-21 12:01:00.090281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.367 [2024-07-21 12:01:00.090320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:01.367 [2024-07-21 12:01:00.090330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.367 [2024-07-21 12:01:00.090337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.367 [2024-07-21 12:01:00.098308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.367 [2024-07-21 12:01:00.098340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:01.367 [2024-07-21 12:01:00.098350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.367 [2024-07-21 12:01:00.098358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.367 [2024-07-21 12:01:00.098403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.367 [2024-07-21 12:01:00.098412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:01.367 [2024-07-21 12:01:00.098420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.367 [2024-07-21 12:01:00.098427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.367 [2024-07-21 12:01:00.098449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.367 [2024-07-21 12:01:00.098467] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:01.367 [2024-07-21 12:01:00.098479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.367 [2024-07-21 12:01:00.098487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.367 [2024-07-21 12:01:00.098550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.367 [2024-07-21 12:01:00.098561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:01.367 [2024-07-21 12:01:00.098568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.367 [2024-07-21 12:01:00.098576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.367 [2024-07-21 12:01:00.098608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.367 [2024-07-21 12:01:00.098619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:01.367 [2024-07-21 12:01:00.098627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.367 [2024-07-21 12:01:00.098637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.367 [2024-07-21 12:01:00.098672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.367 [2024-07-21 12:01:00.098681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:01.367 [2024-07-21 12:01:00.098690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.367 [2024-07-21 12:01:00.098696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.367 [2024-07-21 12:01:00.098736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:01.367 [2024-07-21 12:01:00.098745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:01.367 [2024-07-21 12:01:00.098755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:01.367 [2024-07-21 12:01:00.098764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:01.367 [2024-07-21 12:01:00.098884] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 65.454 ms, result 0 00:19:01.937 00:19:01.937 00:19:01.937 12:01:00 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:01.937 [2024-07-21 12:01:00.606115] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
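
The restore.sh step above reads --count=262144 blocks from the ftl0 bdev back into testfile. Assuming ftl0 exposes 4 KiB logical blocks (an assumption; the block size is not stated in this log), that is exactly 1 GiB, matching the 1024 MB total the preceding copy loop reported, and the loop's wall-clock span squares with its "average 27 MBps". A back-of-the-envelope check:

    count = 262144                     # --count: input blocks to copy
    block_size = 4096                  # assumed ftl0 logical block size (bytes)
    mib = count * block_size / 2**20   # 1024.0 MiB -- the copy loop's total
    secs = 37                          # ~12:00:23 -> 12:01:00 per the timestamps
    print(f"{mib:.0f} MiB at ~{mib / secs:.1f} MiB/s")  # ~27.7, vs "average 27 MBps"
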
00:19:01.937 [2024-07-21 12:01:00.606240] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91012 ] 00:19:01.937 [2024-07-21 12:01:00.771996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.196 [2024-07-21 12:01:00.816072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.197 [2024-07-21 12:01:00.916422] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:02.197 [2024-07-21 12:01:00.916488] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:02.458 [2024-07-21 12:01:01.063285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.458 [2024-07-21 12:01:01.063334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:02.458 [2024-07-21 12:01:01.063357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:02.458 [2024-07-21 12:01:01.063365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.458 [2024-07-21 12:01:01.063421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.458 [2024-07-21 12:01:01.063433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:02.458 [2024-07-21 12:01:01.063440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:02.458 [2024-07-21 12:01:01.063450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.458 [2024-07-21 12:01:01.063468] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:02.458 [2024-07-21 12:01:01.063656] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:02.458 [2024-07-21 12:01:01.063683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.458 [2024-07-21 12:01:01.063694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:02.458 [2024-07-21 12:01:01.063702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:19:02.458 [2024-07-21 12:01:01.063709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.458 [2024-07-21 12:01:01.065058] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:02.458 [2024-07-21 12:01:01.067423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.458 [2024-07-21 12:01:01.067467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:02.458 [2024-07-21 12:01:01.067480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.372 ms 00:19:02.458 [2024-07-21 12:01:01.067488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.458 [2024-07-21 12:01:01.067542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.458 [2024-07-21 12:01:01.067563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:02.458 [2024-07-21 12:01:01.067571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:02.458 [2024-07-21 12:01:01.067579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.458 [2024-07-21 12:01:01.074186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.458 [2024-07-21 
12:01:01.074216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:02.458 [2024-07-21 12:01:01.074225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.574 ms 00:19:02.458 [2024-07-21 12:01:01.074232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.458 [2024-07-21 12:01:01.074456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.458 [2024-07-21 12:01:01.074482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:02.458 [2024-07-21 12:01:01.074492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:19:02.458 [2024-07-21 12:01:01.074500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.458 [2024-07-21 12:01:01.074543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.458 [2024-07-21 12:01:01.074556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:02.459 [2024-07-21 12:01:01.074569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:02.459 [2024-07-21 12:01:01.074585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.459 [2024-07-21 12:01:01.074611] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:02.459 [2024-07-21 12:01:01.076185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.459 [2024-07-21 12:01:01.076211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:02.459 [2024-07-21 12:01:01.076220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.585 ms 00:19:02.459 [2024-07-21 12:01:01.076227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.459 [2024-07-21 12:01:01.076256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.459 [2024-07-21 12:01:01.076265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:02.459 [2024-07-21 12:01:01.076277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:02.459 [2024-07-21 12:01:01.076293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.459 [2024-07-21 12:01:01.076313] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:02.459 [2024-07-21 12:01:01.076339] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:02.459 [2024-07-21 12:01:01.076370] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:02.459 [2024-07-21 12:01:01.076397] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:02.459 [2024-07-21 12:01:01.076478] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:02.459 [2024-07-21 12:01:01.076504] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:02.459 [2024-07-21 12:01:01.076516] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:02.459 [2024-07-21 12:01:01.076526] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:02.459 [2024-07-21 12:01:01.076535] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:02.459 [2024-07-21 12:01:01.076543] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:02.459 [2024-07-21 12:01:01.076557] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:02.459 [2024-07-21 12:01:01.076571] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:02.459 [2024-07-21 12:01:01.076578] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:02.459 [2024-07-21 12:01:01.076587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.459 [2024-07-21 12:01:01.076595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:02.459 [2024-07-21 12:01:01.076609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:19:02.459 [2024-07-21 12:01:01.076620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.459 [2024-07-21 12:01:01.076682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.459 [2024-07-21 12:01:01.076690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:02.459 [2024-07-21 12:01:01.076698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:19:02.459 [2024-07-21 12:01:01.076706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.459 [2024-07-21 12:01:01.076789] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:02.459 [2024-07-21 12:01:01.076799] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:02.459 [2024-07-21 12:01:01.076807] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:02.459 [2024-07-21 12:01:01.076815] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.459 [2024-07-21 12:01:01.076860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:02.459 [2024-07-21 12:01:01.076869] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:02.459 [2024-07-21 12:01:01.076877] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:02.459 [2024-07-21 12:01:01.076884] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:02.459 [2024-07-21 12:01:01.076891] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:02.459 [2024-07-21 12:01:01.076898] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:02.459 [2024-07-21 12:01:01.076905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:02.459 [2024-07-21 12:01:01.076912] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:02.459 [2024-07-21 12:01:01.076928] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:02.459 [2024-07-21 12:01:01.076935] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:02.459 [2024-07-21 12:01:01.076942] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:02.459 [2024-07-21 12:01:01.076948] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.459 [2024-07-21 12:01:01.076957] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:02.459 [2024-07-21 12:01:01.076963] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:02.459 [2024-07-21 12:01:01.076969] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:19:02.459 [2024-07-21 12:01:01.076976] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:02.459 [2024-07-21 12:01:01.076983] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:02.459 [2024-07-21 12:01:01.076989] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.459 [2024-07-21 12:01:01.076995] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:02.459 [2024-07-21 12:01:01.077001] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:02.459 [2024-07-21 12:01:01.077007] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.459 [2024-07-21 12:01:01.077013] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:02.459 [2024-07-21 12:01:01.077019] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:02.459 [2024-07-21 12:01:01.077025] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.459 [2024-07-21 12:01:01.077032] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:02.459 [2024-07-21 12:01:01.077038] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:02.459 [2024-07-21 12:01:01.077044] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:02.459 [2024-07-21 12:01:01.077050] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:02.459 [2024-07-21 12:01:01.077059] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:02.459 [2024-07-21 12:01:01.077066] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:02.459 [2024-07-21 12:01:01.077072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:02.459 [2024-07-21 12:01:01.077078] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:02.459 [2024-07-21 12:01:01.077084] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:02.459 [2024-07-21 12:01:01.077090] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:02.459 [2024-07-21 12:01:01.077097] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:02.459 [2024-07-21 12:01:01.077103] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.459 [2024-07-21 12:01:01.077109] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:02.459 [2024-07-21 12:01:01.077115] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:02.459 [2024-07-21 12:01:01.077122] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.459 [2024-07-21 12:01:01.077128] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:02.459 [2024-07-21 12:01:01.077135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:02.459 [2024-07-21 12:01:01.077142] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:02.459 [2024-07-21 12:01:01.077150] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:02.459 [2024-07-21 12:01:01.077157] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:02.459 [2024-07-21 12:01:01.077167] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:02.459 [2024-07-21 12:01:01.077174] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:02.459 [2024-07-21 12:01:01.077180] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:02.459 [2024-07-21 12:01:01.077186] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:02.459 [2024-07-21 12:01:01.077192] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:02.459 [2024-07-21 12:01:01.077199] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:02.459 [2024-07-21 12:01:01.077214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:02.459 [2024-07-21 12:01:01.077223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:02.459 [2024-07-21 12:01:01.077231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:02.459 [2024-07-21 12:01:01.077238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:02.459 [2024-07-21 12:01:01.077244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:02.459 [2024-07-21 12:01:01.077251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:02.459 [2024-07-21 12:01:01.077258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:02.459 [2024-07-21 12:01:01.077266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:02.459 [2024-07-21 12:01:01.077272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:02.459 [2024-07-21 12:01:01.077279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:02.459 [2024-07-21 12:01:01.077288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:02.459 [2024-07-21 12:01:01.077295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:02.459 [2024-07-21 12:01:01.077302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:02.459 [2024-07-21 12:01:01.077309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:02.459 [2024-07-21 12:01:01.077315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:02.459 [2024-07-21 12:01:01.077321] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:02.459 [2024-07-21 12:01:01.077329] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:02.459 [2024-07-21 12:01:01.077336] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
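
Each "SB metadata layout" record above gives a region's type, version, start block, and size; within one device section consecutive regions tile the space, each blk_offs equal to the previous blk_offs plus blk_sz (for the nvc section: 0x0 + 0x20 = 0x20, 0x20 + 0x5000 = 0x5020, and so on, with type 0xfffffffe marking the leftover free region). A quick illustrative checker, not SPDK code -- split a captured dump at the "SB metadata layout - nvc:" / "- base dev:" headers before applying it (the base-device list continues below):

    import re

    # One record per region, e.g. "Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80"
    REGION = re.compile(r"Region type:0x[0-9a-f]+ ver:\d+ "
                        r"blk_offs:(0x[0-9a-f]+) blk_sz:(0x[0-9a-f]+)")

    def tiles_cleanly(section: str) -> bool:
        # True if every region starts exactly where the previous one ends.
        r = [(int(o, 16), int(s, 16)) for o, s in REGION.findall(section)]
        return all(off + sz == r[i + 1][0] for i, (off, sz) in enumerate(r[:-1]))
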
00:19:02.459 [2024-07-21 12:01:01.077343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:02.459 [2024-07-21 12:01:01.077362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:02.460 [2024-07-21 12:01:01.077370] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:02.460 [2024-07-21 12:01:01.077378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.460 [2024-07-21 12:01:01.077386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:02.460 [2024-07-21 12:01:01.077396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:19:02.460 [2024-07-21 12:01:01.077402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.460 [2024-07-21 12:01:01.096162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.460 [2024-07-21 12:01:01.096194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:02.460 [2024-07-21 12:01:01.096206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.751 ms 00:19:02.460 [2024-07-21 12:01:01.096214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.460 [2024-07-21 12:01:01.096286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.460 [2024-07-21 12:01:01.096295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:02.460 [2024-07-21 12:01:01.096303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:02.460 [2024-07-21 12:01:01.096314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.460 [2024-07-21 12:01:01.105761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.460 [2024-07-21 12:01:01.105795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:02.460 [2024-07-21 12:01:01.105806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.401 ms 00:19:02.460 [2024-07-21 12:01:01.105814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.460 [2024-07-21 12:01:01.105857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.460 [2024-07-21 12:01:01.105866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:02.460 [2024-07-21 12:01:01.105876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:02.460 [2024-07-21 12:01:01.105888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.460 [2024-07-21 12:01:01.106328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.460 [2024-07-21 12:01:01.106352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:02.460 [2024-07-21 12:01:01.106362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:19:02.460 [2024-07-21 12:01:01.106370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.460 [2024-07-21 12:01:01.106484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.460 [2024-07-21 12:01:01.106503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:02.460 [2024-07-21 12:01:01.106530] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:19:02.460 [2024-07-21 12:01:01.106539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.460 [2024-07-21 12:01:01.112262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.460 [2024-07-21 12:01:01.112292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:02.460 [2024-07-21 12:01:01.112302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.698 ms 00:19:02.460 [2024-07-21 12:01:01.112310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.460 [2024-07-21 12:01:01.114874] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:02.460 [2024-07-21 12:01:01.114906] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:02.460 [2024-07-21 12:01:01.114927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.460 [2024-07-21 12:01:01.114937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:02.460 [2024-07-21 12:01:01.114947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.540 ms 00:19:02.460 [2024-07-21 12:01:01.114954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.460 [2024-07-21 12:01:01.127546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.460 [2024-07-21 12:01:01.127598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:02.460 [2024-07-21 12:01:01.127619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.579 ms 00:19:02.460 [2024-07-21 12:01:01.127626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.460 [2024-07-21 12:01:01.129374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.460 [2024-07-21 12:01:01.129409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:02.460 [2024-07-21 12:01:01.129418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.714 ms 00:19:02.460 [2024-07-21 12:01:01.129425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.460 [2024-07-21 12:01:01.130896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.460 [2024-07-21 12:01:01.130926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:02.460 [2024-07-21 12:01:01.130935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.442 ms 00:19:02.460 [2024-07-21 12:01:01.130942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.460 [2024-07-21 12:01:01.131205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.460 [2024-07-21 12:01:01.131220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:02.460 [2024-07-21 12:01:01.131230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:19:02.460 [2024-07-21 12:01:01.131240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.460 [2024-07-21 12:01:01.151471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.460 [2024-07-21 12:01:01.151549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:02.460 [2024-07-21 12:01:01.151573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.250 ms 00:19:02.460 
[2024-07-21 12:01:01.151581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.460 [2024-07-21 12:01:01.157497] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:02.460 [2024-07-21 12:01:01.160305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.460 [2024-07-21 12:01:01.160334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:02.460 [2024-07-21 12:01:01.160344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.686 ms 00:19:02.460 [2024-07-21 12:01:01.160362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.460 [2024-07-21 12:01:01.160427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.460 [2024-07-21 12:01:01.160437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:02.460 [2024-07-21 12:01:01.160445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:02.460 [2024-07-21 12:01:01.160453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.460 [2024-07-21 12:01:01.160530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.460 [2024-07-21 12:01:01.160550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:02.460 [2024-07-21 12:01:01.160566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:02.460 [2024-07-21 12:01:01.160573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.460 [2024-07-21 12:01:01.160592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.460 [2024-07-21 12:01:01.160607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:02.460 [2024-07-21 12:01:01.160614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:02.460 [2024-07-21 12:01:01.160621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.460 [2024-07-21 12:01:01.160651] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:02.460 [2024-07-21 12:01:01.160660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.460 [2024-07-21 12:01:01.160667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:02.460 [2024-07-21 12:01:01.160676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:02.460 [2024-07-21 12:01:01.160683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.460 [2024-07-21 12:01:01.164331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.460 [2024-07-21 12:01:01.164365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:02.460 [2024-07-21 12:01:01.164375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.637 ms 00:19:02.460 [2024-07-21 12:01:01.164384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.460 [2024-07-21 12:01:01.164445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:02.460 [2024-07-21 12:01:01.164454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:02.460 [2024-07-21 12:01:01.164463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:02.460 [2024-07-21 12:01:01.164475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:02.460 [2024-07-21 12:01:01.165452] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 101.983 ms, result 0 00:19:36.777  Copying: 29/1024 [MB] (29 MBps) ... Copying: 1024/1024 [MB] (average 30 MBps)[2024-07-21 12:01:35.576146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.777 [2024-07-21 12:01:35.576216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:36.777 [2024-07-21 12:01:35.576238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:36.777 [2024-07-21 12:01:35.576247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.777 [2024-07-21 12:01:35.576270] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:36.777 [2024-07-21 12:01:35.576947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.777 [2024-07-21 12:01:35.576979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:36.777 [2024-07-21 12:01:35.576989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:19:36.777 [2024-07-21 12:01:35.576997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.777 [2024-07-21 12:01:35.577195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.777 [2024-07-21 12:01:35.577211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:36.777 [2024-07-21 12:01:35.577223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.167 ms 00:19:36.777 [2024-07-21 12:01:35.577230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.777 [2024-07-21 12:01:35.579836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.777 [2024-07-21 12:01:35.579861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:36.777 [2024-07-21 12:01:35.579882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.598 ms 00:19:36.777 [2024-07-21 12:01:35.579891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.777 [2024-07-21 12:01:35.585760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.777 [2024-07-21 12:01:35.585793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:36.777 [2024-07-21 12:01:35.585803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.859 ms
00:19:36.777 [2024-07-21 12:01:35.585816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.777 [2024-07-21 12:01:35.587226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.777 [2024-07-21 12:01:35.587269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:36.777 [2024-07-21 12:01:35.587281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.350 ms 00:19:36.777 [2024-07-21 12:01:35.587289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.777 [2024-07-21 12:01:35.591291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.777 [2024-07-21 12:01:35.591331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:36.777 [2024-07-21 12:01:35.591342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.978 ms 00:19:36.777 [2024-07-21 12:01:35.591350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.777 [2024-07-21 12:01:35.591457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.777 [2024-07-21 12:01:35.591469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:36.777 [2024-07-21 12:01:35.591478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:19:36.777 [2024-07-21 12:01:35.591490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.777 [2024-07-21 12:01:35.593291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.777 [2024-07-21 12:01:35.593324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:36.777 [2024-07-21 12:01:35.593334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.790 ms 00:19:36.777 [2024-07-21 12:01:35.593341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.777 [2024-07-21 12:01:35.594993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.777 [2024-07-21 12:01:35.595024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:36.777 [2024-07-21 12:01:35.595033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.628 ms 00:19:36.777 [2024-07-21 12:01:35.595040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.777 [2024-07-21 12:01:35.596146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.777 [2024-07-21 12:01:35.596179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:36.777 [2024-07-21 12:01:35.596189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.084 ms 00:19:36.777 [2024-07-21 12:01:35.596214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.777 [2024-07-21 12:01:35.597266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.777 [2024-07-21 12:01:35.597300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:36.777 [2024-07-21 12:01:35.597309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.991 ms 00:19:36.777 [2024-07-21 12:01:35.597317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.777 [2024-07-21 12:01:35.597339] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:36.777 [2024-07-21 12:01:35.597355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:36.777 
[2024-07-21 12:01:35.597366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:36.777 [2024-07-21 12:01:35.597374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:36.777 [2024-07-21 12:01:35.597382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:36.777 [2024-07-21 12:01:35.597391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:36.777 [2024-07-21 12:01:35.597399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:36.777 [2024-07-21 12:01:35.597407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:36.777 [2024-07-21 12:01:35.597415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:36.777 [2024-07-21 12:01:35.597422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:36.777 [2024-07-21 12:01:35.597430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:36.777 [2024-07-21 12:01:35.597437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:36.777 [2024-07-21 12:01:35.597444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:36.777 [2024-07-21 12:01:35.597453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:36.777 [2024-07-21 12:01:35.597460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:36.777 [2024-07-21 12:01:35.597468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:36.777 [2024-07-21 12:01:35.597499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:36.777 [2024-07-21 12:01:35.597507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:36.777 [2024-07-21 12:01:35.597515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:36.777 [2024-07-21 12:01:35.597523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:36.777 [2024-07-21 12:01:35.597531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:36.777 [2024-07-21 12:01:35.597539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:36.777 [2024-07-21 12:01:35.597547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:36.777 [2024-07-21 12:01:35.597555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:36.777 [2024-07-21 12:01:35.597563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:36.777 [2024-07-21 12:01:35.597570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 
00:19:36.778 [2024-07-21 12:01:35.597585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 
wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.597993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.598001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.598008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.598016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.598023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.598031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.598038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.598045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.598052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.598060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.598068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.598075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.598082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.598089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.598097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.598105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.598114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.598122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.598130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.598138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.598160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.598168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.598176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.598183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:36.778 [2024-07-21 12:01:35.598197] ftl_debug.c: 211:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] 00:19:36.778 [2024-07-21 12:01:35.598206] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 470802e4-3beb-44b2-b990-f45a7745233c 00:19:36.778 [2024-07-21 12:01:35.598214] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:36.778 [2024-07-21 12:01:35.598222] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:36.778 [2024-07-21 12:01:35.598228] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:36.778 [2024-07-21 12:01:35.598248] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:36.778 [2024-07-21 12:01:35.598255] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:36.778 [2024-07-21 12:01:35.598264] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:36.778 [2024-07-21 12:01:35.598274] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:36.778 [2024-07-21 12:01:35.598281] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:36.778 [2024-07-21 12:01:35.598287] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:36.778 [2024-07-21 12:01:35.598295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.778 [2024-07-21 12:01:35.598302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:36.778 [2024-07-21 12:01:35.598310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.958 ms 00:19:36.778 [2024-07-21 12:01:35.598317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.778 [2024-07-21 12:01:35.600212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.778 [2024-07-21 12:01:35.600235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:36.778 [2024-07-21 12:01:35.600245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.881 ms 00:19:36.778 [2024-07-21 12:01:35.600252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.778 [2024-07-21 12:01:35.600358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:36.778 [2024-07-21 12:01:35.600379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:36.778 [2024-07-21 12:01:35.600395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:19:36.778 [2024-07-21 12:01:35.600419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.778 [2024-07-21 12:01:35.605696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.778 [2024-07-21 12:01:35.605719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:36.778 [2024-07-21 12:01:35.605732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.778 [2024-07-21 12:01:35.605742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.778 [2024-07-21 12:01:35.605786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.778 [2024-07-21 12:01:35.605795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:36.778 [2024-07-21 12:01:35.605803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.778 [2024-07-21 12:01:35.605828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.778 [2024-07-21 12:01:35.605867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:19:36.778 [2024-07-21 12:01:35.605878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:36.778 [2024-07-21 12:01:35.605886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.778 [2024-07-21 12:01:35.605893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.778 [2024-07-21 12:01:35.605911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.778 [2024-07-21 12:01:35.605919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:36.778 [2024-07-21 12:01:35.605933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.778 [2024-07-21 12:01:35.605941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.778 [2024-07-21 12:01:35.619109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.778 [2024-07-21 12:01:35.619145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:36.778 [2024-07-21 12:01:35.619155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.778 [2024-07-21 12:01:35.619169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.778 [2024-07-21 12:01:35.627622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.778 [2024-07-21 12:01:35.627658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:36.778 [2024-07-21 12:01:35.627671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.778 [2024-07-21 12:01:35.627681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.778 [2024-07-21 12:01:35.627728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.778 [2024-07-21 12:01:35.627739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:36.778 [2024-07-21 12:01:35.627749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.778 [2024-07-21 12:01:35.627769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.778 [2024-07-21 12:01:35.627797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.778 [2024-07-21 12:01:35.627812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:36.778 [2024-07-21 12:01:35.627832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.778 [2024-07-21 12:01:35.627849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.778 [2024-07-21 12:01:35.627923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.778 [2024-07-21 12:01:35.627934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:36.778 [2024-07-21 12:01:35.627943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.778 [2024-07-21 12:01:35.627952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.778 [2024-07-21 12:01:35.627990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.778 [2024-07-21 12:01:35.628002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:36.778 [2024-07-21 12:01:35.628015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.778 [2024-07-21 12:01:35.628024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.778 [2024-07-21 
12:01:35.628063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.778 [2024-07-21 12:01:35.628073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:36.778 [2024-07-21 12:01:35.628082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.778 [2024-07-21 12:01:35.628091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.778 [2024-07-21 12:01:35.628147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:36.778 [2024-07-21 12:01:35.628162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:36.778 [2024-07-21 12:01:35.628171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:36.778 [2024-07-21 12:01:35.628179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:36.778 [2024-07-21 12:01:35.628312] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 52.225 ms, result 0 00:19:37.035 00:19:37.035 00:19:37.035 12:01:35 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:38.933 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:38.933 12:01:37 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:19:38.933 [2024-07-21 12:01:37.783857] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:19:38.933 [2024-07-21 12:01:37.783980] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91400 ] 00:19:39.191 [2024-07-21 12:01:37.948203] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.191 [2024-07-21 12:01:37.994065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.450 [2024-07-21 12:01:38.094883] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:39.450 [2024-07-21 12:01:38.094955] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:39.450 [2024-07-21 12:01:38.242209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.450 [2024-07-21 12:01:38.242264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:39.450 [2024-07-21 12:01:38.242281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:39.450 [2024-07-21 12:01:38.242299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.450 [2024-07-21 12:01:38.242354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.450 [2024-07-21 12:01:38.242366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:39.450 [2024-07-21 12:01:38.242375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:39.450 [2024-07-21 12:01:38.242385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.450 [2024-07-21 12:01:38.242404] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:39.450 [2024-07-21 12:01:38.242613] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using 
bdev as NV Cache device 00:19:39.450 [2024-07-21 12:01:38.242637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.450 [2024-07-21 12:01:38.242654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:39.450 [2024-07-21 12:01:38.242664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:19:39.450 [2024-07-21 12:01:38.242679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.450 [2024-07-21 12:01:38.244122] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:39.450 [2024-07-21 12:01:38.246534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.450 [2024-07-21 12:01:38.246567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:39.450 [2024-07-21 12:01:38.246584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.417 ms 00:19:39.450 [2024-07-21 12:01:38.246617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.450 [2024-07-21 12:01:38.246675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.450 [2024-07-21 12:01:38.246686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:39.450 [2024-07-21 12:01:38.246696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:39.450 [2024-07-21 12:01:38.246704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.450 [2024-07-21 12:01:38.253314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.450 [2024-07-21 12:01:38.253342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:39.450 [2024-07-21 12:01:38.253352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.573 ms 00:19:39.450 [2024-07-21 12:01:38.253370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.450 [2024-07-21 12:01:38.253467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.450 [2024-07-21 12:01:38.253479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:39.450 [2024-07-21 12:01:38.253488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:39.450 [2024-07-21 12:01:38.253495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.450 [2024-07-21 12:01:38.253546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.450 [2024-07-21 12:01:38.253558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:39.450 [2024-07-21 12:01:38.253566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:39.450 [2024-07-21 12:01:38.253573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.450 [2024-07-21 12:01:38.253598] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:39.450 [2024-07-21 12:01:38.255216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.450 [2024-07-21 12:01:38.255245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:39.450 [2024-07-21 12:01:38.255255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.629 ms 00:19:39.450 [2024-07-21 12:01:38.255273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.450 [2024-07-21 12:01:38.255304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:19:39.450 [2024-07-21 12:01:38.255315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:39.450 [2024-07-21 12:01:38.255324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:39.450 [2024-07-21 12:01:38.255331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.450 [2024-07-21 12:01:38.255359] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:39.450 [2024-07-21 12:01:38.255380] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:39.450 [2024-07-21 12:01:38.255425] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:39.450 [2024-07-21 12:01:38.255446] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:19:39.450 [2024-07-21 12:01:38.255539] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:39.450 [2024-07-21 12:01:38.255549] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:39.450 [2024-07-21 12:01:38.255561] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:19:39.450 [2024-07-21 12:01:38.255572] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:39.450 [2024-07-21 12:01:38.255581] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:39.450 [2024-07-21 12:01:38.255590] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:39.450 [2024-07-21 12:01:38.255598] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:39.450 [2024-07-21 12:01:38.255612] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:39.450 [2024-07-21 12:01:38.255619] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:39.450 [2024-07-21 12:01:38.255634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.450 [2024-07-21 12:01:38.255650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:39.450 [2024-07-21 12:01:38.255668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:19:39.450 [2024-07-21 12:01:38.255686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.450 [2024-07-21 12:01:38.255758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.450 [2024-07-21 12:01:38.255766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:39.450 [2024-07-21 12:01:38.255774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:19:39.450 [2024-07-21 12:01:38.255781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.450 [2024-07-21 12:01:38.255879] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:39.450 [2024-07-21 12:01:38.255891] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:39.450 [2024-07-21 12:01:38.255899] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:39.450 [2024-07-21 12:01:38.255910] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.450 [2024-07-21 12:01:38.255917] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:39.450 [2024-07-21 12:01:38.255924] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:39.450 [2024-07-21 12:01:38.255931] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:39.450 [2024-07-21 12:01:38.255938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:39.450 [2024-07-21 12:01:38.255946] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:39.450 [2024-07-21 12:01:38.255954] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:39.450 [2024-07-21 12:01:38.255960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:39.450 [2024-07-21 12:01:38.255967] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:39.450 [2024-07-21 12:01:38.255985] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:39.450 [2024-07-21 12:01:38.255992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:39.451 [2024-07-21 12:01:38.255999] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:39.451 [2024-07-21 12:01:38.256006] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.451 [2024-07-21 12:01:38.256015] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:39.451 [2024-07-21 12:01:38.256022] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:39.451 [2024-07-21 12:01:38.256029] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.451 [2024-07-21 12:01:38.256036] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:39.451 [2024-07-21 12:01:38.256043] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:39.451 [2024-07-21 12:01:38.256049] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:39.451 [2024-07-21 12:01:38.256056] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:39.451 [2024-07-21 12:01:38.256063] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:39.451 [2024-07-21 12:01:38.256069] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:39.451 [2024-07-21 12:01:38.256076] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:39.451 [2024-07-21 12:01:38.256082] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:39.451 [2024-07-21 12:01:38.256089] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:39.451 [2024-07-21 12:01:38.256095] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:39.451 [2024-07-21 12:01:38.256102] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:39.451 [2024-07-21 12:01:38.256108] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:39.451 [2024-07-21 12:01:38.256114] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:39.451 [2024-07-21 12:01:38.256125] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:39.451 [2024-07-21 12:01:38.256131] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:39.451 [2024-07-21 12:01:38.256138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:39.451 [2024-07-21 12:01:38.256145] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:39.451 
[2024-07-21 12:01:38.256151] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:39.451 [2024-07-21 12:01:38.256157] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:39.451 [2024-07-21 12:01:38.256164] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:39.451 [2024-07-21 12:01:38.256170] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.451 [2024-07-21 12:01:38.256176] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:39.451 [2024-07-21 12:01:38.256182] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:39.451 [2024-07-21 12:01:38.256189] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.451 [2024-07-21 12:01:38.256196] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:39.451 [2024-07-21 12:01:38.256203] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:39.451 [2024-07-21 12:01:38.256218] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:39.451 [2024-07-21 12:01:38.256225] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.451 [2024-07-21 12:01:38.256232] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:39.451 [2024-07-21 12:01:38.256242] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:39.451 [2024-07-21 12:01:38.256250] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:39.451 [2024-07-21 12:01:38.256256] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:39.451 [2024-07-21 12:01:38.256262] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:39.451 [2024-07-21 12:01:38.256269] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:39.451 [2024-07-21 12:01:38.256277] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:39.451 [2024-07-21 12:01:38.256286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:39.451 [2024-07-21 12:01:38.256294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:39.451 [2024-07-21 12:01:38.256302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:39.451 [2024-07-21 12:01:38.256309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:39.451 [2024-07-21 12:01:38.256316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:39.451 [2024-07-21 12:01:38.256323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:39.451 [2024-07-21 12:01:38.256330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:39.451 [2024-07-21 12:01:38.256337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:39.451 [2024-07-21 12:01:38.256344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:39.451 [2024-07-21 12:01:38.256352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:39.451 [2024-07-21 12:01:38.256361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:39.451 [2024-07-21 12:01:38.256369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:39.451 [2024-07-21 12:01:38.256376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:39.451 [2024-07-21 12:01:38.256383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:39.451 [2024-07-21 12:01:38.256390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:39.451 [2024-07-21 12:01:38.256397] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:39.451 [2024-07-21 12:01:38.256411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:39.451 [2024-07-21 12:01:38.256429] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:39.451 [2024-07-21 12:01:38.256437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:39.451 [2024-07-21 12:01:38.256444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:39.451 [2024-07-21 12:01:38.256452] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:39.451 [2024-07-21 12:01:38.256460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.451 [2024-07-21 12:01:38.256467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:39.451 [2024-07-21 12:01:38.256478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.651 ms 00:19:39.451 [2024-07-21 12:01:38.256485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.451 [2024-07-21 12:01:38.281330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.451 [2024-07-21 12:01:38.281444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:39.451 [2024-07-21 12:01:38.281485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.837 ms 00:19:39.451 [2024-07-21 12:01:38.281533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.451 [2024-07-21 12:01:38.281656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.451 [2024-07-21 12:01:38.281696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:39.451 [2024-07-21 12:01:38.281777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:39.451 [2024-07-21 12:01:38.281836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.451 [2024-07-21 
12:01:38.293553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.451 [2024-07-21 12:01:38.293684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:39.451 [2024-07-21 12:01:38.293736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.644 ms 00:19:39.451 [2024-07-21 12:01:38.293773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.451 [2024-07-21 12:01:38.293885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.451 [2024-07-21 12:01:38.293953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:39.451 [2024-07-21 12:01:38.294009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:39.451 [2024-07-21 12:01:38.294059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.451 [2024-07-21 12:01:38.294624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.451 [2024-07-21 12:01:38.294679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:39.451 [2024-07-21 12:01:38.294716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:19:39.451 [2024-07-21 12:01:38.294751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.451 [2024-07-21 12:01:38.294910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.451 [2024-07-21 12:01:38.294958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:39.451 [2024-07-21 12:01:38.294993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:19:39.451 [2024-07-21 12:01:38.295039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.451 [2024-07-21 12:01:38.300993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.451 [2024-07-21 12:01:38.301079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:39.451 [2024-07-21 12:01:38.301111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.925 ms 00:19:39.451 [2024-07-21 12:01:38.301136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.451 [2024-07-21 12:01:38.303853] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:39.451 [2024-07-21 12:01:38.303952] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:39.451 [2024-07-21 12:01:38.303997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.451 [2024-07-21 12:01:38.304022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:39.451 [2024-07-21 12:01:38.304045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.750 ms 00:19:39.451 [2024-07-21 12:01:38.304067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.710 [2024-07-21 12:01:38.317037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.710 [2024-07-21 12:01:38.317109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:39.710 [2024-07-21 12:01:38.317139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.916 ms 00:19:39.710 [2024-07-21 12:01:38.317160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.710 [2024-07-21 12:01:38.318866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:39.711 [2024-07-21 12:01:38.318933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:39.711 [2024-07-21 12:01:38.318963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.655 ms 00:19:39.711 [2024-07-21 12:01:38.318985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.711 [2024-07-21 12:01:38.320335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.711 [2024-07-21 12:01:38.320403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:39.711 [2024-07-21 12:01:38.320432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.305 ms 00:19:39.711 [2024-07-21 12:01:38.320454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.711 [2024-07-21 12:01:38.320769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.711 [2024-07-21 12:01:38.320829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:39.711 [2024-07-21 12:01:38.320866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:19:39.711 [2024-07-21 12:01:38.320901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.711 [2024-07-21 12:01:38.341256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.711 [2024-07-21 12:01:38.341411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:39.711 [2024-07-21 12:01:38.341443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.330 ms 00:19:39.711 [2024-07-21 12:01:38.341464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.711 [2024-07-21 12:01:38.347528] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:39.711 [2024-07-21 12:01:38.350589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.711 [2024-07-21 12:01:38.350649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:39.711 [2024-07-21 12:01:38.350676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.075 ms 00:19:39.711 [2024-07-21 12:01:38.350695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.711 [2024-07-21 12:01:38.350774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.711 [2024-07-21 12:01:38.350797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:39.711 [2024-07-21 12:01:38.350825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:39.711 [2024-07-21 12:01:38.350834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.711 [2024-07-21 12:01:38.350918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.711 [2024-07-21 12:01:38.350928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:39.711 [2024-07-21 12:01:38.350935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:39.711 [2024-07-21 12:01:38.350943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.711 [2024-07-21 12:01:38.350971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.711 [2024-07-21 12:01:38.350986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:39.711 [2024-07-21 12:01:38.350994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 
00:19:39.711 [2024-07-21 12:01:38.351001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.711 [2024-07-21 12:01:38.351028] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:39.711 [2024-07-21 12:01:38.351037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.711 [2024-07-21 12:01:38.351047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:39.711 [2024-07-21 12:01:38.351054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:39.711 [2024-07-21 12:01:38.351061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.711 [2024-07-21 12:01:38.354647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.711 [2024-07-21 12:01:38.354682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:39.711 [2024-07-21 12:01:38.354693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.575 ms 00:19:39.711 [2024-07-21 12:01:38.354709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.711 [2024-07-21 12:01:38.354771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.711 [2024-07-21 12:01:38.354780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:39.711 [2024-07-21 12:01:38.354792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:39.711 [2024-07-21 12:01:38.354800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.711 [2024-07-21 12:01:38.355886] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 113.409 ms, result 0 00:20:17.417  Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-21 12:02:16.083794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.417 [2024-07-21 12:02:16.083875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:17.417 [2024-07-21 12:02:16.083890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:17.417 [2024-07-21 12:02:16.083897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.417 [2024-07-21 12:02:16.084780] mngt/ftl_mngt_ioch.c: 
136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:17.417 [2024-07-21 12:02:16.087884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.417 [2024-07-21 12:02:16.087924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:17.417 [2024-07-21 12:02:16.087935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.080 ms 00:20:17.417 [2024-07-21 12:02:16.087942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.417 [2024-07-21 12:02:16.099073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.417 [2024-07-21 12:02:16.099107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:17.417 [2024-07-21 12:02:16.099117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.686 ms 00:20:17.417 [2024-07-21 12:02:16.099124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.417 [2024-07-21 12:02:16.120247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.417 [2024-07-21 12:02:16.120301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:17.417 [2024-07-21 12:02:16.120321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.135 ms 00:20:17.417 [2024-07-21 12:02:16.120330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.417 [2024-07-21 12:02:16.125275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.417 [2024-07-21 12:02:16.125301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:17.417 [2024-07-21 12:02:16.125310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.922 ms 00:20:17.417 [2024-07-21 12:02:16.125317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.417 [2024-07-21 12:02:16.126885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.417 [2024-07-21 12:02:16.126914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:17.417 [2024-07-21 12:02:16.126922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.538 ms 00:20:17.417 [2024-07-21 12:02:16.126930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.417 [2024-07-21 12:02:16.131418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.417 [2024-07-21 12:02:16.131452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:17.417 [2024-07-21 12:02:16.131467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.474 ms 00:20:17.417 [2024-07-21 12:02:16.131474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.417 [2024-07-21 12:02:16.219683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.417 [2024-07-21 12:02:16.219739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:17.417 [2024-07-21 12:02:16.219755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.349 ms 00:20:17.417 [2024-07-21 12:02:16.219765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.417 [2024-07-21 12:02:16.222380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.417 [2024-07-21 12:02:16.222419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:17.417 [2024-07-21 12:02:16.222429] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 2.602 ms 00:20:17.417 [2024-07-21 12:02:16.222437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.417 [2024-07-21 12:02:16.224126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.417 [2024-07-21 12:02:16.224163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:17.417 [2024-07-21 12:02:16.224174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.666 ms 00:20:17.417 [2024-07-21 12:02:16.224182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.417 [2024-07-21 12:02:16.225363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.417 [2024-07-21 12:02:16.225397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:17.417 [2024-07-21 12:02:16.225407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.154 ms 00:20:17.417 [2024-07-21 12:02:16.225439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.417 [2024-07-21 12:02:16.226727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.417 [2024-07-21 12:02:16.226762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:17.417 [2024-07-21 12:02:16.226772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.233 ms 00:20:17.417 [2024-07-21 12:02:16.226780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.417 [2024-07-21 12:02:16.226805] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:17.417 [2024-07-21 12:02:16.226835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 86016 / 261120 wr_cnt: 1 state: open 00:20:17.417 [2024-07-21 12:02:16.226847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.226857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.226866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.226876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.226885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.226894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.226904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.226914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.226923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.226931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.226941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.226951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.226960] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.226969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.226978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.226985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.226994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.227003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.227013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.227021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.227029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.227037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.227046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.227054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.227063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.227073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.227082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.227091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.227100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.227110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.227119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.227129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.227137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.227145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.227154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.227162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.227171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:17.417 [2024-07-21 12:02:16.227188] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 
12:02:16.227407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 
00:20:17.418 [2024-07-21 12:02:16.227623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:17.418 [2024-07-21 12:02:16.227736] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:17.418 [2024-07-21 12:02:16.227745] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 470802e4-3beb-44b2-b990-f45a7745233c 00:20:17.418 [2024-07-21 12:02:16.227760] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 86016 00:20:17.418 [2024-07-21 12:02:16.227780] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 86976 00:20:17.418 [2024-07-21 12:02:16.227788] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 86016 00:20:17.418 [2024-07-21 12:02:16.227797] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0112 00:20:17.418 [2024-07-21 12:02:16.227805] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:17.418 [2024-07-21 12:02:16.227815] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:17.418 [2024-07-21 12:02:16.227978] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:17.418 [2024-07-21 12:02:16.227999] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:17.418 [2024-07-21 12:02:16.228020] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:17.418 [2024-07-21 12:02:16.228088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.418 [2024-07-21 12:02:16.228113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:17.418 [2024-07-21 12:02:16.228137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.286 ms 00:20:17.418 [2024-07-21 12:02:16.228197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.418 [2024-07-21 12:02:16.230030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:17.418 [2024-07-21 12:02:16.230048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:17.418 [2024-07-21 12:02:16.230059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.777 ms 00:20:17.418 [2024-07-21 12:02:16.230068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.418 [2024-07-21 12:02:16.230185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.418 [2024-07-21 12:02:16.230201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:17.418 [2024-07-21 12:02:16.230214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:20:17.418 [2024-07-21 12:02:16.230232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.418 [2024-07-21 12:02:16.235747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.418 [2024-07-21 12:02:16.235777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:17.418 [2024-07-21 12:02:16.235788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.418 [2024-07-21 12:02:16.235797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.419 [2024-07-21 12:02:16.235864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.419 [2024-07-21 12:02:16.235875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:17.419 [2024-07-21 12:02:16.235889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.419 [2024-07-21 12:02:16.235898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.419 [2024-07-21 12:02:16.235939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.419 [2024-07-21 12:02:16.235951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:17.419 [2024-07-21 12:02:16.235960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.419 [2024-07-21 12:02:16.235968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.419 [2024-07-21 12:02:16.235984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.419 [2024-07-21 12:02:16.235994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:17.419 [2024-07-21 12:02:16.236003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.419 [2024-07-21 12:02:16.236015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.419 [2024-07-21 12:02:16.249375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.419 [2024-07-21 12:02:16.249421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:17.419 [2024-07-21 12:02:16.249432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.419 [2024-07-21 12:02:16.249441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.419 [2024-07-21 12:02:16.257873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.419 [2024-07-21 12:02:16.257910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:17.419 [2024-07-21 12:02:16.257921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.419 [2024-07-21 12:02:16.257934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.419 [2024-07-21 
12:02:16.257983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.419 [2024-07-21 12:02:16.258002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:17.419 [2024-07-21 12:02:16.258011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.419 [2024-07-21 12:02:16.258019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.419 [2024-07-21 12:02:16.258042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.419 [2024-07-21 12:02:16.258051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:17.419 [2024-07-21 12:02:16.258067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.419 [2024-07-21 12:02:16.258075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.419 [2024-07-21 12:02:16.258162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.419 [2024-07-21 12:02:16.258181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:17.419 [2024-07-21 12:02:16.258191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.419 [2024-07-21 12:02:16.258199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.419 [2024-07-21 12:02:16.258247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.419 [2024-07-21 12:02:16.258264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:17.419 [2024-07-21 12:02:16.258272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.419 [2024-07-21 12:02:16.258280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.419 [2024-07-21 12:02:16.258320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.419 [2024-07-21 12:02:16.258329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:17.419 [2024-07-21 12:02:16.258337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.419 [2024-07-21 12:02:16.258344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.419 [2024-07-21 12:02:16.258384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.419 [2024-07-21 12:02:16.258394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:17.419 [2024-07-21 12:02:16.258402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.419 [2024-07-21 12:02:16.258418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.419 [2024-07-21 12:02:16.258565] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 177.402 ms, result 0 00:20:18.797 00:20:18.797 00:20:18.798 12:02:17 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:20:18.798 [2024-07-21 12:02:17.627706] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:20:18.798 [2024-07-21 12:02:17.627858] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91805 ] 00:20:19.057 [2024-07-21 12:02:17.801335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:19.057 [2024-07-21 12:02:17.853928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.318 [2024-07-21 12:02:17.958963] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:19.318 [2024-07-21 12:02:17.959032] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:19.318 [2024-07-21 12:02:18.108097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.318 [2024-07-21 12:02:18.108161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:19.318 [2024-07-21 12:02:18.108184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:19.318 [2024-07-21 12:02:18.108196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.318 [2024-07-21 12:02:18.108258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.318 [2024-07-21 12:02:18.108276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:19.318 [2024-07-21 12:02:18.108286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:19.318 [2024-07-21 12:02:18.108299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.318 [2024-07-21 12:02:18.108320] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:19.318 [2024-07-21 12:02:18.108584] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:19.318 [2024-07-21 12:02:18.108603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.318 [2024-07-21 12:02:18.108623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:19.318 [2024-07-21 12:02:18.108632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:20:19.318 [2024-07-21 12:02:18.108641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.318 [2024-07-21 12:02:18.110139] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:19.318 [2024-07-21 12:02:18.112766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.318 [2024-07-21 12:02:18.112815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:19.318 [2024-07-21 12:02:18.112848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.635 ms 00:20:19.318 [2024-07-21 12:02:18.112857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.318 [2024-07-21 12:02:18.112917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.318 [2024-07-21 12:02:18.112928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:19.318 [2024-07-21 12:02:18.112937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:20:19.318 [2024-07-21 12:02:18.112946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.318 [2024-07-21 12:02:18.119789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.318 [2024-07-21 
12:02:18.119837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:19.318 [2024-07-21 12:02:18.119849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.807 ms 00:20:19.318 [2024-07-21 12:02:18.119857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.318 [2024-07-21 12:02:18.119952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.318 [2024-07-21 12:02:18.119967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:19.318 [2024-07-21 12:02:18.119976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:19.318 [2024-07-21 12:02:18.119986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.318 [2024-07-21 12:02:18.120058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.318 [2024-07-21 12:02:18.120084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:19.318 [2024-07-21 12:02:18.120094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:19.318 [2024-07-21 12:02:18.120131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.318 [2024-07-21 12:02:18.120165] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:19.318 [2024-07-21 12:02:18.121911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.318 [2024-07-21 12:02:18.121939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:19.318 [2024-07-21 12:02:18.121950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.753 ms 00:20:19.318 [2024-07-21 12:02:18.121959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.318 [2024-07-21 12:02:18.121996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.318 [2024-07-21 12:02:18.122007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:19.318 [2024-07-21 12:02:18.122019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:19.318 [2024-07-21 12:02:18.122028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.318 [2024-07-21 12:02:18.122050] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:19.318 [2024-07-21 12:02:18.122072] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:19.318 [2024-07-21 12:02:18.122113] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:19.318 [2024-07-21 12:02:18.122139] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:20:19.318 [2024-07-21 12:02:18.122252] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:19.318 [2024-07-21 12:02:18.122275] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:19.318 [2024-07-21 12:02:18.122287] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:20:19.318 [2024-07-21 12:02:18.122299] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:19.318 [2024-07-21 12:02:18.122319] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:19.318 [2024-07-21 12:02:18.122328] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:19.318 [2024-07-21 12:02:18.122337] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:19.318 [2024-07-21 12:02:18.122344] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:19.319 [2024-07-21 12:02:18.122353] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:19.319 [2024-07-21 12:02:18.122370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.319 [2024-07-21 12:02:18.122378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:19.319 [2024-07-21 12:02:18.122387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:20:19.319 [2024-07-21 12:02:18.122398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.319 [2024-07-21 12:02:18.122475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.319 [2024-07-21 12:02:18.122484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:19.319 [2024-07-21 12:02:18.122492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:20:19.319 [2024-07-21 12:02:18.122500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.319 [2024-07-21 12:02:18.122600] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:19.319 [2024-07-21 12:02:18.122619] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:19.319 [2024-07-21 12:02:18.122632] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:19.319 [2024-07-21 12:02:18.122642] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.319 [2024-07-21 12:02:18.122653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:19.319 [2024-07-21 12:02:18.122661] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:19.319 [2024-07-21 12:02:18.122669] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:19.319 [2024-07-21 12:02:18.122822] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:19.319 [2024-07-21 12:02:18.122847] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:19.319 [2024-07-21 12:02:18.122856] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:19.319 [2024-07-21 12:02:18.122864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:19.319 [2024-07-21 12:02:18.122871] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:19.319 [2024-07-21 12:02:18.122890] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:19.319 [2024-07-21 12:02:18.122899] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:19.319 [2024-07-21 12:02:18.122907] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:19.319 [2024-07-21 12:02:18.122915] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.319 [2024-07-21 12:02:18.122923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:19.319 [2024-07-21 12:02:18.122931] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:19.319 [2024-07-21 12:02:18.122941] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:20:19.319 [2024-07-21 12:02:18.122949] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:19.319 [2024-07-21 12:02:18.122957] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:19.319 [2024-07-21 12:02:18.122964] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.319 [2024-07-21 12:02:18.122972] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:19.319 [2024-07-21 12:02:18.122979] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:19.319 [2024-07-21 12:02:18.122986] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.319 [2024-07-21 12:02:18.122994] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:19.319 [2024-07-21 12:02:18.123002] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:19.319 [2024-07-21 12:02:18.123030] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.319 [2024-07-21 12:02:18.123038] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:19.319 [2024-07-21 12:02:18.123046] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:19.319 [2024-07-21 12:02:18.123053] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.319 [2024-07-21 12:02:18.123072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:19.319 [2024-07-21 12:02:18.123080] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:19.319 [2024-07-21 12:02:18.123088] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:19.319 [2024-07-21 12:02:18.123100] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:19.319 [2024-07-21 12:02:18.123109] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:19.319 [2024-07-21 12:02:18.123116] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:19.319 [2024-07-21 12:02:18.123124] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:19.319 [2024-07-21 12:02:18.123131] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:19.319 [2024-07-21 12:02:18.123138] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.319 [2024-07-21 12:02:18.123145] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:19.319 [2024-07-21 12:02:18.123152] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:19.319 [2024-07-21 12:02:18.123159] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.319 [2024-07-21 12:02:18.123166] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:19.319 [2024-07-21 12:02:18.123174] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:19.319 [2024-07-21 12:02:18.123189] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:19.319 [2024-07-21 12:02:18.123197] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.319 [2024-07-21 12:02:18.123207] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:19.319 [2024-07-21 12:02:18.123216] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:19.319 [2024-07-21 12:02:18.123223] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:19.319 [2024-07-21 12:02:18.123234] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:19.319 [2024-07-21 12:02:18.123242] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:19.319 [2024-07-21 12:02:18.123250] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:19.319 [2024-07-21 12:02:18.123259] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:19.319 [2024-07-21 12:02:18.123276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:19.319 [2024-07-21 12:02:18.123286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:19.319 [2024-07-21 12:02:18.123294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:19.319 [2024-07-21 12:02:18.123303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:19.319 [2024-07-21 12:02:18.123312] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:19.319 [2024-07-21 12:02:18.123321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:19.319 [2024-07-21 12:02:18.123329] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:19.319 [2024-07-21 12:02:18.123337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:19.319 [2024-07-21 12:02:18.123345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:19.319 [2024-07-21 12:02:18.123354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:19.319 [2024-07-21 12:02:18.123362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:19.319 [2024-07-21 12:02:18.123369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:19.319 [2024-07-21 12:02:18.123380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:19.319 [2024-07-21 12:02:18.123389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:19.319 [2024-07-21 12:02:18.123398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:19.319 [2024-07-21 12:02:18.123406] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:19.319 [2024-07-21 12:02:18.123415] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:19.319 [2024-07-21 12:02:18.123424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:20:19.319 [2024-07-21 12:02:18.123432] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:19.319 [2024-07-21 12:02:18.123441] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:19.319 [2024-07-21 12:02:18.123449] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:19.319 [2024-07-21 12:02:18.123457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.319 [2024-07-21 12:02:18.123474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:19.319 [2024-07-21 12:02:18.123487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.918 ms 00:20:19.319 [2024-07-21 12:02:18.123495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.319 [2024-07-21 12:02:18.143423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.319 [2024-07-21 12:02:18.143476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:19.319 [2024-07-21 12:02:18.143516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.911 ms 00:20:19.319 [2024-07-21 12:02:18.143529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.319 [2024-07-21 12:02:18.143654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.319 [2024-07-21 12:02:18.143667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:19.319 [2024-07-21 12:02:18.143678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:20:19.319 [2024-07-21 12:02:18.143689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.319 [2024-07-21 12:02:18.153968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.319 [2024-07-21 12:02:18.154015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:19.319 [2024-07-21 12:02:18.154028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.211 ms 00:20:19.319 [2024-07-21 12:02:18.154037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.319 [2024-07-21 12:02:18.154095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.319 [2024-07-21 12:02:18.154108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:19.319 [2024-07-21 12:02:18.154118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:19.319 [2024-07-21 12:02:18.154138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.319 [2024-07-21 12:02:18.154596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.319 [2024-07-21 12:02:18.154607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:19.319 [2024-07-21 12:02:18.154617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:20:19.320 [2024-07-21 12:02:18.154625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.320 [2024-07-21 12:02:18.154746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.320 [2024-07-21 12:02:18.154759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:19.320 [2024-07-21 12:02:18.154769] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:20:19.320 [2024-07-21 12:02:18.154781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.320 [2024-07-21 12:02:18.160776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.320 [2024-07-21 12:02:18.160823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:19.320 [2024-07-21 12:02:18.160835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.982 ms 00:20:19.320 [2024-07-21 12:02:18.160852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.320 [2024-07-21 12:02:18.163413] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:20:19.320 [2024-07-21 12:02:18.163452] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:19.320 [2024-07-21 12:02:18.163468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.320 [2024-07-21 12:02:18.163477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:19.320 [2024-07-21 12:02:18.163487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.523 ms 00:20:19.320 [2024-07-21 12:02:18.163495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.320 [2024-07-21 12:02:18.177636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.320 [2024-07-21 12:02:18.177676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:19.320 [2024-07-21 12:02:18.177689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.126 ms 00:20:19.320 [2024-07-21 12:02:18.177698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.320 [2024-07-21 12:02:18.179646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.320 [2024-07-21 12:02:18.179683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:19.320 [2024-07-21 12:02:18.179701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.907 ms 00:20:19.320 [2024-07-21 12:02:18.179709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.580 [2024-07-21 12:02:18.181193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.580 [2024-07-21 12:02:18.181226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:19.580 [2024-07-21 12:02:18.181237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.451 ms 00:20:19.580 [2024-07-21 12:02:18.181245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.580 [2024-07-21 12:02:18.181586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.580 [2024-07-21 12:02:18.181602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:19.580 [2024-07-21 12:02:18.181613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:20:19.580 [2024-07-21 12:02:18.181624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.580 [2024-07-21 12:02:18.203343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.580 [2024-07-21 12:02:18.203432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:19.580 [2024-07-21 12:02:18.203457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.738 ms 00:20:19.580 
[2024-07-21 12:02:18.203476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.580 [2024-07-21 12:02:18.211294] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:19.580 [2024-07-21 12:02:18.214723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.580 [2024-07-21 12:02:18.214760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:19.580 [2024-07-21 12:02:18.214772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.213 ms 00:20:19.580 [2024-07-21 12:02:18.214781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.580 [2024-07-21 12:02:18.214889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.580 [2024-07-21 12:02:18.214903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:19.580 [2024-07-21 12:02:18.214914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:19.580 [2024-07-21 12:02:18.214931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.580 [2024-07-21 12:02:18.216441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.580 [2024-07-21 12:02:18.216488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:19.580 [2024-07-21 12:02:18.216500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.473 ms 00:20:19.580 [2024-07-21 12:02:18.216509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.580 [2024-07-21 12:02:18.216536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.580 [2024-07-21 12:02:18.216559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:19.580 [2024-07-21 12:02:18.216569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:19.580 [2024-07-21 12:02:18.216578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.580 [2024-07-21 12:02:18.216630] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:19.580 [2024-07-21 12:02:18.216641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.580 [2024-07-21 12:02:18.216649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:19.580 [2024-07-21 12:02:18.216660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:19.580 [2024-07-21 12:02:18.216669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.580 [2024-07-21 12:02:18.220512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.580 [2024-07-21 12:02:18.220549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:19.580 [2024-07-21 12:02:18.220560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.830 ms 00:20:19.580 [2024-07-21 12:02:18.220570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.580 [2024-07-21 12:02:18.220638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.580 [2024-07-21 12:02:18.220648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:19.580 [2024-07-21 12:02:18.220658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:19.580 [2024-07-21 12:02:18.220670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.580 [2024-07-21 
12:02:18.227119] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 118.315 ms, result 0 00:20:54.914  Copying: 19/1024 [MB] (19 MBps) ... Copying: 1024/1024 [MB] (average 29 MBps)[2024-07-21 12:02:53.553483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.914 [2024-07-21 12:02:53.553547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:54.914 [2024-07-21 12:02:53.553561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:54.914 [2024-07-21 12:02:53.553576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.914 [2024-07-21 12:02:53.553596] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:54.914 [2024-07-21 12:02:53.554277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.914 [2024-07-21 12:02:53.554304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:54.914 [2024-07-21 12:02:53.554319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.670 ms 00:20:54.914 [2024-07-21 12:02:53.554326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.914 [2024-07-21 12:02:53.554491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.914 [2024-07-21 12:02:53.554500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:54.914 [2024-07-21 12:02:53.554508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.152 ms 00:20:54.914 [2024-07-21 12:02:53.554514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.914 [2024-07-21 12:02:53.559921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.914 [2024-07-21 12:02:53.559966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:54.914 [2024-07-21 12:02:53.559979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.391 ms 00:20:54.914 [2024-07-21 12:02:53.559996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.914 [2024-07-21 12:02:53.565000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.914 [2024-07-21 12:02:53.565027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:54.914 [2024-07-21
12:02:53.565035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.978 ms 00:20:54.914 [2024-07-21 12:02:53.565043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.914 [2024-07-21 12:02:53.566450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.914 [2024-07-21 12:02:53.566483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:54.914 [2024-07-21 12:02:53.566492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.380 ms 00:20:54.914 [2024-07-21 12:02:53.566499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.914 [2024-07-21 12:02:53.571419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.914 [2024-07-21 12:02:53.571454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:54.914 [2024-07-21 12:02:53.571464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.906 ms 00:20:54.914 [2024-07-21 12:02:53.571477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.914 [2024-07-21 12:02:53.711501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.914 [2024-07-21 12:02:53.711536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:54.914 [2024-07-21 12:02:53.711548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 140.266 ms 00:20:54.914 [2024-07-21 12:02:53.711556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.914 [2024-07-21 12:02:53.713867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.914 [2024-07-21 12:02:53.713899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:54.914 [2024-07-21 12:02:53.713908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.302 ms 00:20:54.914 [2024-07-21 12:02:53.713915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.914 [2024-07-21 12:02:53.715518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.914 [2024-07-21 12:02:53.715550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:54.914 [2024-07-21 12:02:53.715559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.581 ms 00:20:54.914 [2024-07-21 12:02:53.715565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.914 [2024-07-21 12:02:53.716662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.914 [2024-07-21 12:02:53.716697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:54.914 [2024-07-21 12:02:53.716706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.075 ms 00:20:54.914 [2024-07-21 12:02:53.716713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.914 [2024-07-21 12:02:53.717887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.914 [2024-07-21 12:02:53.717916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:54.914 [2024-07-21 12:02:53.717925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.113 ms 00:20:54.914 [2024-07-21 12:02:53.717932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.914 [2024-07-21 12:02:53.717954] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:54.914 [2024-07-21 12:02:53.717968] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:20:54.914 [2024-07-21 12:02:53.717977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.717985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.717993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 
12:02:53.718153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:54.914 [2024-07-21 12:02:53.718250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:20:54.915 [2024-07-21 12:02:53.718336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:54.915 [2024-07-21 12:02:53.718718] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:54.915 [2024-07-21 12:02:53.718726] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 470802e4-3beb-44b2-b990-f45a7745233c 00:20:54.915 [2024-07-21 12:02:53.718733] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:20:54.915 [2024-07-21 12:02:53.718745] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 48832 00:20:54.915 [2024-07-21 12:02:53.718752] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 47872 00:20:54.915 [2024-07-21 12:02:53.718760] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0201 00:20:54.915 [2024-07-21 12:02:53.718766] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:54.915 [2024-07-21 12:02:53.718774] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:54.915 [2024-07-21 12:02:53.718780] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:54.915 [2024-07-21 12:02:53.718787] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:54.915 [2024-07-21 12:02:53.718793] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:54.915 [2024-07-21 12:02:53.718799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.915 [2024-07-21 12:02:53.718806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:54.915 [2024-07-21 12:02:53.718814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.848 ms 00:20:54.915 [2024-07-21 12:02:53.718820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.915 [2024-07-21 12:02:53.720714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.915 [2024-07-21 12:02:53.720738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:54.915 [2024-07-21 12:02:53.720746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.869 ms 00:20:54.915 [2024-07-21 12:02:53.720753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.915 [2024-07-21 12:02:53.720876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:54.915 [2024-07-21 12:02:53.720887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:54.915 [2024-07-21 12:02:53.720901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:20:54.915 [2024-07-21 12:02:53.720911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.915 [2024-07-21 12:02:53.726334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.915 [2024-07-21 12:02:53.726396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:54.915 [2024-07-21 12:02:53.726438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.915 [2024-07-21 12:02:53.726457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.915 [2024-07-21 12:02:53.726515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.915 [2024-07-21 12:02:53.726536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:54.915 [2024-07-21 12:02:53.726555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.915 [2024-07-21 12:02:53.726581] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:54.915 [2024-07-21 12:02:53.726633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.915 [2024-07-21 12:02:53.726696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:54.915 [2024-07-21 12:02:53.726723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.915 [2024-07-21 12:02:53.726763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.915 [2024-07-21 12:02:53.726791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.915 [2024-07-21 12:02:53.726865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:54.915 [2024-07-21 12:02:53.726896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.915 [2024-07-21 12:02:53.726926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.915 [2024-07-21 12:02:53.739568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.915 [2024-07-21 12:02:53.739676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:54.915 [2024-07-21 12:02:53.739705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.915 [2024-07-21 12:02:53.739724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.915 [2024-07-21 12:02:53.747766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.916 [2024-07-21 12:02:53.747857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:54.916 [2024-07-21 12:02:53.747885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.916 [2024-07-21 12:02:53.747904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.916 [2024-07-21 12:02:53.747977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.916 [2024-07-21 12:02:53.748000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:54.916 [2024-07-21 12:02:53.748074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.916 [2024-07-21 12:02:53.748093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.916 [2024-07-21 12:02:53.748139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.916 [2024-07-21 12:02:53.748189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:54.916 [2024-07-21 12:02:53.748218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.916 [2024-07-21 12:02:53.748237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.916 [2024-07-21 12:02:53.748343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.916 [2024-07-21 12:02:53.748381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:54.916 [2024-07-21 12:02:53.748407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.916 [2024-07-21 12:02:53.748425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.916 [2024-07-21 12:02:53.748495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.916 [2024-07-21 12:02:53.748531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:54.916 [2024-07-21 12:02:53.748559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
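In the shutdown dump above, each band line reads valid blocks / band capacity; Band 1 holds 133888 of its 261120 blocks and is the only non-free band, matching the "total valid LBAs: 133888" counter, so all surviving data sits in that single open band. The reported write amplification is likewise consistent with total writes divided by user writes. A small awk cross-check, illustrative only, with the figures copied from the dump:

    # WAF consistent with total writes / user writes as dumped by ftl_dev_dump_stats
    awk 'BEGIN { printf "WAF = %.4f\n", 48832 / 47872 }'    # -> WAF = 1.0201, as logged
    # valid data held in Band 1, in MiB (4096-byte blocks)
    awk 'BEGIN { print 133888 * 4096 / 1048576 " MiB" }'    # -> 523 MiB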
00:20:54.916 [2024-07-21 12:02:53.748599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.916 [2024-07-21 12:02:53.748665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.916 [2024-07-21 12:02:53.748700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:54.916 [2024-07-21 12:02:53.748728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.916 [2024-07-21 12:02:53.748747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.916 [2024-07-21 12:02:53.748824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:54.916 [2024-07-21 12:02:53.748866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:54.916 [2024-07-21 12:02:53.748897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:54.916 [2024-07-21 12:02:53.748925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:54.916 [2024-07-21 12:02:53.749060] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 195.921 ms, result 0 00:20:55.175 00:20:55.175 00:20:55.175 12:02:54 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:57.079 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:57.079 12:02:55 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:20:57.079 12:02:55 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:20:57.079 12:02:55 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:57.079 12:02:55 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:57.079 12:02:55 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:57.079 12:02:55 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 90342 00:20:57.079 12:02:55 ftl.ftl_restore -- common/autotest_common.sh@946 -- # '[' -z 90342 ']' 00:20:57.079 Process with pid 90342 is not found 00:20:57.079 Remove shared memory files 00:20:57.079 12:02:55 ftl.ftl_restore -- common/autotest_common.sh@950 -- # kill -0 90342 00:20:57.079 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (90342) - No such process 00:20:57.079 12:02:55 ftl.ftl_restore -- common/autotest_common.sh@973 -- # echo 'Process with pid 90342 is not found' 00:20:57.079 12:02:55 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:20:57.079 12:02:55 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:57.079 12:02:55 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:20:57.079 12:02:55 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:20:57.079 12:02:55 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:20:57.079 12:02:55 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:57.079 12:02:55 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:20:57.079 ************************************ 00:20:57.079 END TEST ftl_restore 00:20:57.079 ************************************ 00:20:57.079 00:20:57.079 real 2m52.222s 00:20:57.079 user 2m41.769s 00:20:57.079 sys 0m11.620s 00:20:57.079 12:02:55 ftl.ftl_restore -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:57.079 12:02:55 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:57.079 12:02:55 ftl -- ftl/ftl.sh@77 -- # run_test 
ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:20:57.079 12:02:55 ftl -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:20:57.079 12:02:55 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:57.079 12:02:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:57.079 ************************************ 00:20:57.079 START TEST ftl_dirty_shutdown 00:20:57.079 ************************************ 00:20:57.079 12:02:55 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:20:57.339 * Looking for test storage... 00:20:57.339 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=92253 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 92253 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@827 -- # '[' -z 92253 ']' 00:20:57.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:57.339 12:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:57.339 [2024-07-21 12:02:56.181471] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
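The harness pattern visible above - start spdk_tgt in the background, record its pid in svcpid, then block in waitforlisten until the RPC socket answers - is roughly the following. This is a minimal sketch, not the real helper: the actual waitforlisten in autotest_common.sh adds retry limits and cleanup on failure, and the use of rpc_get_methods as the liveness probe is an assumption here.

    # Start the SPDK target on core 0 and wait until /var/tmp/spdk.sock accepts RPCs.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$svcpid" || exit 1   # bail out if the target already died
        sleep 0.5
    done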
00:20:57.339 [2024-07-21 12:02:56.181594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92253 ] 00:20:57.599 [2024-07-21 12:02:56.345258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.599 [2024-07-21 12:02:56.392345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.166 12:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:58.166 12:02:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # return 0 00:20:58.166 12:02:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:58.166 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:20:58.166 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:58.166 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:20:58.166 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:20:58.166 12:02:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:58.424 12:02:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:58.424 12:02:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:20:58.424 12:02:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:58.424 12:02:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:20:58.424 12:02:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:20:58.424 12:02:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:20:58.424 12:02:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:20:58.424 12:02:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:58.682 12:02:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:20:58.682 { 00:20:58.682 "name": "nvme0n1", 00:20:58.682 "aliases": [ 00:20:58.682 "3fb06fec-9765-4515-b8a0-af3aabf696a4" 00:20:58.682 ], 00:20:58.682 "product_name": "NVMe disk", 00:20:58.682 "block_size": 4096, 00:20:58.682 "num_blocks": 1310720, 00:20:58.682 "uuid": "3fb06fec-9765-4515-b8a0-af3aabf696a4", 00:20:58.682 "assigned_rate_limits": { 00:20:58.682 "rw_ios_per_sec": 0, 00:20:58.683 "rw_mbytes_per_sec": 0, 00:20:58.683 "r_mbytes_per_sec": 0, 00:20:58.683 "w_mbytes_per_sec": 0 00:20:58.683 }, 00:20:58.683 "claimed": true, 00:20:58.683 "claim_type": "read_many_write_one", 00:20:58.683 "zoned": false, 00:20:58.683 "supported_io_types": { 00:20:58.683 "read": true, 00:20:58.683 "write": true, 00:20:58.683 "unmap": true, 00:20:58.683 "write_zeroes": true, 00:20:58.683 "flush": true, 00:20:58.683 "reset": true, 00:20:58.683 "compare": true, 00:20:58.683 "compare_and_write": false, 00:20:58.683 "abort": true, 00:20:58.683 "nvme_admin": true, 00:20:58.683 "nvme_io": true 00:20:58.683 }, 00:20:58.683 "driver_specific": { 00:20:58.683 "nvme": [ 00:20:58.683 { 00:20:58.683 "pci_address": "0000:00:11.0", 00:20:58.683 "trid": { 00:20:58.683 "trtype": "PCIe", 00:20:58.683 "traddr": "0000:00:11.0" 00:20:58.683 }, 00:20:58.683 "ctrlr_data": { 00:20:58.683 "cntlid": 0, 00:20:58.683 
"vendor_id": "0x1b36", 00:20:58.683 "model_number": "QEMU NVMe Ctrl", 00:20:58.683 "serial_number": "12341", 00:20:58.683 "firmware_revision": "8.0.0", 00:20:58.683 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:58.683 "oacs": { 00:20:58.683 "security": 0, 00:20:58.683 "format": 1, 00:20:58.683 "firmware": 0, 00:20:58.683 "ns_manage": 1 00:20:58.683 }, 00:20:58.683 "multi_ctrlr": false, 00:20:58.683 "ana_reporting": false 00:20:58.683 }, 00:20:58.683 "vs": { 00:20:58.683 "nvme_version": "1.4" 00:20:58.683 }, 00:20:58.683 "ns_data": { 00:20:58.683 "id": 1, 00:20:58.683 "can_share": false 00:20:58.683 } 00:20:58.683 } 00:20:58.683 ], 00:20:58.683 "mp_policy": "active_passive" 00:20:58.683 } 00:20:58.683 } 00:20:58.683 ]' 00:20:58.683 12:02:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:20:58.683 12:02:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:20:58.683 12:02:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:20:58.683 12:02:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=1310720 00:20:58.683 12:02:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:20:58.683 12:02:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 5120 00:20:58.683 12:02:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:20:58.683 12:02:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:58.683 12:02:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:20:58.683 12:02:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:58.683 12:02:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:58.941 12:02:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=a5a067b8-f9dd-4672-b895-a4e16a8513d1 00:20:58.941 12:02:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:20:58.941 12:02:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a5a067b8-f9dd-4672-b895-a4e16a8513d1 00:20:59.199 12:02:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:59.199 12:02:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=e41e9d2e-00e4-4c5a-9a0d-09bb0504b22a 00:20:59.199 12:02:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e41e9d2e-00e4-4c5a-9a0d-09bb0504b22a 00:20:59.458 12:02:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=99512439-a031-4dff-a17e-8157ebb5dbea 00:20:59.458 12:02:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:20:59.458 12:02:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 99512439-a031-4dff-a17e-8157ebb5dbea 00:20:59.458 12:02:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:20:59.458 12:02:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:59.458 12:02:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=99512439-a031-4dff-a17e-8157ebb5dbea 00:20:59.458 12:02:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:20:59.458 12:02:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 99512439-a031-4dff-a17e-8157ebb5dbea 00:20:59.458 
12:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=99512439-a031-4dff-a17e-8157ebb5dbea 00:20:59.458 12:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:20:59.458 12:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:20:59.458 12:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:20:59.458 12:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 99512439-a031-4dff-a17e-8157ebb5dbea 00:20:59.719 12:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:20:59.719 { 00:20:59.719 "name": "99512439-a031-4dff-a17e-8157ebb5dbea", 00:20:59.719 "aliases": [ 00:20:59.719 "lvs/nvme0n1p0" 00:20:59.719 ], 00:20:59.719 "product_name": "Logical Volume", 00:20:59.719 "block_size": 4096, 00:20:59.719 "num_blocks": 26476544, 00:20:59.719 "uuid": "99512439-a031-4dff-a17e-8157ebb5dbea", 00:20:59.719 "assigned_rate_limits": { 00:20:59.719 "rw_ios_per_sec": 0, 00:20:59.719 "rw_mbytes_per_sec": 0, 00:20:59.719 "r_mbytes_per_sec": 0, 00:20:59.719 "w_mbytes_per_sec": 0 00:20:59.719 }, 00:20:59.719 "claimed": false, 00:20:59.719 "zoned": false, 00:20:59.719 "supported_io_types": { 00:20:59.719 "read": true, 00:20:59.719 "write": true, 00:20:59.719 "unmap": true, 00:20:59.719 "write_zeroes": true, 00:20:59.719 "flush": false, 00:20:59.719 "reset": true, 00:20:59.719 "compare": false, 00:20:59.719 "compare_and_write": false, 00:20:59.719 "abort": false, 00:20:59.719 "nvme_admin": false, 00:20:59.719 "nvme_io": false 00:20:59.719 }, 00:20:59.719 "driver_specific": { 00:20:59.719 "lvol": { 00:20:59.719 "lvol_store_uuid": "e41e9d2e-00e4-4c5a-9a0d-09bb0504b22a", 00:20:59.719 "base_bdev": "nvme0n1", 00:20:59.719 "thin_provision": true, 00:20:59.719 "num_allocated_clusters": 0, 00:20:59.719 "snapshot": false, 00:20:59.719 "clone": false, 00:20:59.719 "esnap_clone": false 00:20:59.719 } 00:20:59.719 } 00:20:59.719 } 00:20:59.719 ]' 00:20:59.719 12:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:20:59.719 12:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:20:59.719 12:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:20:59.719 12:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 00:20:59.719 12:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:20:59.719 12:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:20:59.719 12:02:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:20:59.719 12:02:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:20:59.719 12:02:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:59.981 12:02:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:59.981 12:02:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:59.981 12:02:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 99512439-a031-4dff-a17e-8157ebb5dbea 00:20:59.981 12:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=99512439-a031-4dff-a17e-8157ebb5dbea 00:20:59.981 12:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:20:59.981 
12:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:20:59.981 12:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:20:59.981 12:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 99512439-a031-4dff-a17e-8157ebb5dbea 00:21:00.239 12:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:21:00.239 { 00:21:00.239 "name": "99512439-a031-4dff-a17e-8157ebb5dbea", 00:21:00.239 "aliases": [ 00:21:00.239 "lvs/nvme0n1p0" 00:21:00.239 ], 00:21:00.239 "product_name": "Logical Volume", 00:21:00.239 "block_size": 4096, 00:21:00.239 "num_blocks": 26476544, 00:21:00.239 "uuid": "99512439-a031-4dff-a17e-8157ebb5dbea", 00:21:00.239 "assigned_rate_limits": { 00:21:00.239 "rw_ios_per_sec": 0, 00:21:00.239 "rw_mbytes_per_sec": 0, 00:21:00.239 "r_mbytes_per_sec": 0, 00:21:00.239 "w_mbytes_per_sec": 0 00:21:00.239 }, 00:21:00.239 "claimed": false, 00:21:00.239 "zoned": false, 00:21:00.239 "supported_io_types": { 00:21:00.239 "read": true, 00:21:00.239 "write": true, 00:21:00.239 "unmap": true, 00:21:00.239 "write_zeroes": true, 00:21:00.239 "flush": false, 00:21:00.239 "reset": true, 00:21:00.239 "compare": false, 00:21:00.239 "compare_and_write": false, 00:21:00.239 "abort": false, 00:21:00.239 "nvme_admin": false, 00:21:00.239 "nvme_io": false 00:21:00.239 }, 00:21:00.239 "driver_specific": { 00:21:00.239 "lvol": { 00:21:00.239 "lvol_store_uuid": "e41e9d2e-00e4-4c5a-9a0d-09bb0504b22a", 00:21:00.239 "base_bdev": "nvme0n1", 00:21:00.239 "thin_provision": true, 00:21:00.239 "num_allocated_clusters": 0, 00:21:00.239 "snapshot": false, 00:21:00.239 "clone": false, 00:21:00.239 "esnap_clone": false 00:21:00.239 } 00:21:00.239 } 00:21:00.239 } 00:21:00.239 ]' 00:21:00.239 12:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:21:00.239 12:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:21:00.239 12:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:21:00.239 12:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 00:21:00.239 12:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:21:00.239 12:02:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:21:00.239 12:02:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:21:00.239 12:02:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:00.497 12:02:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:21:00.497 12:02:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 99512439-a031-4dff-a17e-8157ebb5dbea 00:21:00.497 12:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=99512439-a031-4dff-a17e-8157ebb5dbea 00:21:00.497 12:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:21:00.497 12:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:21:00.497 12:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:21:00.497 12:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 99512439-a031-4dff-a17e-8157ebb5dbea 00:21:00.497 12:02:59 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@1378 -- # bdev_info='[ 00:21:00.497 { 00:21:00.497 "name": "99512439-a031-4dff-a17e-8157ebb5dbea", 00:21:00.497 "aliases": [ 00:21:00.497 "lvs/nvme0n1p0" 00:21:00.497 ], 00:21:00.497 "product_name": "Logical Volume", 00:21:00.497 "block_size": 4096, 00:21:00.497 "num_blocks": 26476544, 00:21:00.497 "uuid": "99512439-a031-4dff-a17e-8157ebb5dbea", 00:21:00.497 "assigned_rate_limits": { 00:21:00.497 "rw_ios_per_sec": 0, 00:21:00.497 "rw_mbytes_per_sec": 0, 00:21:00.497 "r_mbytes_per_sec": 0, 00:21:00.497 "w_mbytes_per_sec": 0 00:21:00.497 }, 00:21:00.497 "claimed": false, 00:21:00.497 "zoned": false, 00:21:00.497 "supported_io_types": { 00:21:00.497 "read": true, 00:21:00.497 "write": true, 00:21:00.497 "unmap": true, 00:21:00.497 "write_zeroes": true, 00:21:00.497 "flush": false, 00:21:00.497 "reset": true, 00:21:00.497 "compare": false, 00:21:00.497 "compare_and_write": false, 00:21:00.497 "abort": false, 00:21:00.497 "nvme_admin": false, 00:21:00.497 "nvme_io": false 00:21:00.497 }, 00:21:00.497 "driver_specific": { 00:21:00.497 "lvol": { 00:21:00.497 "lvol_store_uuid": "e41e9d2e-00e4-4c5a-9a0d-09bb0504b22a", 00:21:00.497 "base_bdev": "nvme0n1", 00:21:00.497 "thin_provision": true, 00:21:00.497 "num_allocated_clusters": 0, 00:21:00.497 "snapshot": false, 00:21:00.497 "clone": false, 00:21:00.498 "esnap_clone": false 00:21:00.498 } 00:21:00.498 } 00:21:00.498 } 00:21:00.498 ]' 00:21:00.498 12:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:21:00.756 12:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:21:00.756 12:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:21:00.756 12:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # nb=26476544 00:21:00.756 12:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:21:00.756 12:02:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # echo 103424 00:21:00.756 12:02:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:21:00.756 12:02:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 99512439-a031-4dff-a17e-8157ebb5dbea --l2p_dram_limit 10' 00:21:00.756 12:02:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:21:00.756 12:02:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:21:00.756 12:02:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:21:00.756 12:02:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 99512439-a031-4dff-a17e-8157ebb5dbea --l2p_dram_limit 10 -c nvc0n1p0 00:21:00.756 [2024-07-21 12:02:59.604288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.756 [2024-07-21 12:02:59.604336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:00.756 [2024-07-21 12:02:59.604352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:00.756 [2024-07-21 12:02:59.604359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.756 [2024-07-21 12:02:59.604422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.756 [2024-07-21 12:02:59.604431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:00.757 [2024-07-21 12:02:59.604448] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:00.757 [2024-07-21 12:02:59.604458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.757 [2024-07-21 12:02:59.604480] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:00.757 [2024-07-21 12:02:59.604716] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:00.757 [2024-07-21 12:02:59.604745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.757 [2024-07-21 12:02:59.604752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:00.757 [2024-07-21 12:02:59.604763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms 00:21:00.757 [2024-07-21 12:02:59.604770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.757 [2024-07-21 12:02:59.604799] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID d39eddfe-e90a-49b5-ad2b-4e148dc65448 00:21:00.757 [2024-07-21 12:02:59.606139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.757 [2024-07-21 12:02:59.606165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:00.757 [2024-07-21 12:02:59.606177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:21:00.757 [2024-07-21 12:02:59.606187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.757 [2024-07-21 12:02:59.613349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.757 [2024-07-21 12:02:59.613387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:00.757 [2024-07-21 12:02:59.613398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.136 ms 00:21:00.757 [2024-07-21 12:02:59.613407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.757 [2024-07-21 12:02:59.613536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.757 [2024-07-21 12:02:59.613556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:00.757 [2024-07-21 12:02:59.613564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:21:00.757 [2024-07-21 12:02:59.613573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.757 [2024-07-21 12:02:59.613633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.757 [2024-07-21 12:02:59.613654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:00.757 [2024-07-21 12:02:59.613662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:00.757 [2024-07-21 12:02:59.613671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.757 [2024-07-21 12:02:59.613695] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:00.757 [2024-07-21 12:02:59.615363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.757 [2024-07-21 12:02:59.615391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:00.757 [2024-07-21 12:02:59.615403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.676 ms 00:21:00.757 [2024-07-21 12:02:59.615410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.757 [2024-07-21 12:02:59.615443] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.757 [2024-07-21 12:02:59.615452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:00.757 [2024-07-21 12:02:59.615462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:00.757 [2024-07-21 12:02:59.615478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.757 [2024-07-21 12:02:59.615501] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:00.757 [2024-07-21 12:02:59.615621] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:00.757 [2024-07-21 12:02:59.615636] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:00.757 [2024-07-21 12:02:59.615648] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:21:00.757 [2024-07-21 12:02:59.615660] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:00.757 [2024-07-21 12:02:59.615669] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:00.757 [2024-07-21 12:02:59.615679] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:00.757 [2024-07-21 12:02:59.615686] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:00.757 [2024-07-21 12:02:59.615696] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:00.757 [2024-07-21 12:02:59.615704] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:00.757 [2024-07-21 12:02:59.615712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.757 [2024-07-21 12:02:59.615719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:00.757 [2024-07-21 12:02:59.615740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:21:00.757 [2024-07-21 12:02:59.615747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.757 [2024-07-21 12:02:59.615814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.757 [2024-07-21 12:02:59.615851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:00.757 [2024-07-21 12:02:59.615862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:21:00.757 [2024-07-21 12:02:59.615870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.757 [2024-07-21 12:02:59.615956] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:00.757 [2024-07-21 12:02:59.615968] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:00.757 [2024-07-21 12:02:59.615978] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:00.757 [2024-07-21 12:02:59.615985] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.757 [2024-07-21 12:02:59.615997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:00.757 [2024-07-21 12:02:59.616003] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:00.757 [2024-07-21 12:02:59.616012] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:00.757 [2024-07-21 12:02:59.616018] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:00.757 
[2024-07-21 12:02:59.616026] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:00.757 [2024-07-21 12:02:59.616033] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:00.757 [2024-07-21 12:02:59.616041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:00.757 [2024-07-21 12:02:59.616048] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:00.757 [2024-07-21 12:02:59.616056] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:00.757 [2024-07-21 12:02:59.616063] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:00.757 [2024-07-21 12:02:59.616073] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:00.757 [2024-07-21 12:02:59.616079] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.757 [2024-07-21 12:02:59.616088] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:00.757 [2024-07-21 12:02:59.616093] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:00.757 [2024-07-21 12:02:59.616101] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.757 [2024-07-21 12:02:59.616107] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:00.757 [2024-07-21 12:02:59.616115] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:00.757 [2024-07-21 12:02:59.616121] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.757 [2024-07-21 12:02:59.616129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:00.757 [2024-07-21 12:02:59.616135] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:00.757 [2024-07-21 12:02:59.616143] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.757 [2024-07-21 12:02:59.616150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:00.757 [2024-07-21 12:02:59.616158] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:00.757 [2024-07-21 12:02:59.616164] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.757 [2024-07-21 12:02:59.616171] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:00.757 [2024-07-21 12:02:59.616178] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:00.757 [2024-07-21 12:02:59.616189] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.757 [2024-07-21 12:02:59.616195] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:00.757 [2024-07-21 12:02:59.616203] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:00.757 [2024-07-21 12:02:59.616208] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:00.757 [2024-07-21 12:02:59.616216] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:00.758 [2024-07-21 12:02:59.616222] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:00.758 [2024-07-21 12:02:59.616230] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:00.758 [2024-07-21 12:02:59.616237] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:00.758 [2024-07-21 12:02:59.616244] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:00.758 [2024-07-21 12:02:59.616251] ftl_layout.c: 121:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:21:00.758 [2024-07-21 12:02:59.616258] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:00.758 [2024-07-21 12:02:59.616264] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:00.758 [2024-07-21 12:02:59.616272] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.758 [2024-07-21 12:02:59.616278] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:00.758 [2024-07-21 12:02:59.616286] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:00.758 [2024-07-21 12:02:59.616292] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:00.758 [2024-07-21 12:02:59.616303] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.758 [2024-07-21 12:02:59.616313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:00.758 [2024-07-21 12:02:59.616321] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:00.758 [2024-07-21 12:02:59.616327] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:00.758 [2024-07-21 12:02:59.616335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:00.758 [2024-07-21 12:02:59.616341] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:00.758 [2024-07-21 12:02:59.616349] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:00.758 [2024-07-21 12:02:59.616359] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:00.758 [2024-07-21 12:02:59.616370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:00.758 [2024-07-21 12:02:59.616381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:00.758 [2024-07-21 12:02:59.616389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:00.758 [2024-07-21 12:02:59.616396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:00.758 [2024-07-21 12:02:59.616404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:00.758 [2024-07-21 12:02:59.616411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:00.758 [2024-07-21 12:02:59.616420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:00.758 [2024-07-21 12:02:59.616427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:00.758 [2024-07-21 12:02:59.616437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:00.758 [2024-07-21 12:02:59.616443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:00.758 [2024-07-21 12:02:59.616451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 
blk_sz:0x20 00:21:00.758 [2024-07-21 12:02:59.616458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:00.758 [2024-07-21 12:02:59.616465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:00.758 [2024-07-21 12:02:59.616471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:00.758 [2024-07-21 12:02:59.616480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:00.758 [2024-07-21 12:02:59.616486] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:00.758 [2024-07-21 12:02:59.616495] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:00.758 [2024-07-21 12:02:59.616509] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:00.758 [2024-07-21 12:02:59.616518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:00.758 [2024-07-21 12:02:59.616525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:00.758 [2024-07-21 12:02:59.616534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:00.758 [2024-07-21 12:02:59.616541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.758 [2024-07-21 12:02:59.616550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:00.758 [2024-07-21 12:02:59.616557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.638 ms 00:21:00.758 [2024-07-21 12:02:59.616570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.758 [2024-07-21 12:02:59.616607] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
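(Annotation, not part of the captured log: the RPC sequence the harness has driven up to this point reduces to the sketch below. Every command, name, size and PCIe address appears verbatim in the log above; only the arrangement into a standalone sequence is editorial, and it assumes a running spdk_tgt reachable by the repo's scripts/rpc.py.)

# read block_size/num_blocks of the base lvol; 26476544 blocks x 4096 B = 103424 MiB
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 99512439-a031-4dff-a17e-8157ebb5dbea
# attach the NVMe device that will back the write-buffer cache
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
# carve a 5171 MiB slice of it for the cache (produces nvc0n1p0)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
# create the FTL bdev: base lvol for data, nvc0n1p0 as NV cache, 10 MiB L2P DRAM budget
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
    -d 99512439-a031-4dff-a17e-8157ebb5dbea --l2p_dram_limit 10 -c nvc0n1p0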
00:21:00.758 [2024-07-21 12:02:59.616624] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:06.027 [2024-07-21 12:03:03.929925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.027 [2024-07-21 12:03:03.929980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:06.027 [2024-07-21 12:03:03.930000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4321.632 ms 00:21:06.027 [2024-07-21 12:03:03.930010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.027 [2024-07-21 12:03:03.940591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.027 [2024-07-21 12:03:03.940656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:06.027 [2024-07-21 12:03:03.940670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.517 ms 00:21:06.027 [2024-07-21 12:03:03.940680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.027 [2024-07-21 12:03:03.940784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.027 [2024-07-21 12:03:03.940801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:06.027 [2024-07-21 12:03:03.940810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:06.027 [2024-07-21 12:03:03.940831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.027 [2024-07-21 12:03:03.950195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.027 [2024-07-21 12:03:03.950236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:06.027 [2024-07-21 12:03:03.950247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.321 ms 00:21:06.027 [2024-07-21 12:03:03.950256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.027 [2024-07-21 12:03:03.950295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.027 [2024-07-21 12:03:03.950307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:06.027 [2024-07-21 12:03:03.950315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:06.027 [2024-07-21 12:03:03.950324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.027 [2024-07-21 12:03:03.950742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.027 [2024-07-21 12:03:03.950756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:06.027 [2024-07-21 12:03:03.950764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:21:06.027 [2024-07-21 12:03:03.950773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.027 [2024-07-21 12:03:03.950868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.027 [2024-07-21 12:03:03.950883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:06.027 [2024-07-21 12:03:03.950891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:21:06.027 [2024-07-21 12:03:03.950900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.027 [2024-07-21 12:03:03.957559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.027 [2024-07-21 12:03:03.957599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:06.027 [2024-07-21 
12:03:03.957609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.644 ms 00:21:06.027 [2024-07-21 12:03:03.957620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.027 [2024-07-21 12:03:03.964739] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:06.027 [2024-07-21 12:03:03.967773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.027 [2024-07-21 12:03:03.967798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:06.027 [2024-07-21 12:03:03.967811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.091 ms 00:21:06.027 [2024-07-21 12:03:03.967824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.027 [2024-07-21 12:03:04.073100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.027 [2024-07-21 12:03:04.073146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:06.027 [2024-07-21 12:03:04.073215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.430 ms 00:21:06.027 [2024-07-21 12:03:04.073223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.027 [2024-07-21 12:03:04.073412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.027 [2024-07-21 12:03:04.073425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:06.027 [2024-07-21 12:03:04.073436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:21:06.027 [2024-07-21 12:03:04.073443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.027 [2024-07-21 12:03:04.077004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.027 [2024-07-21 12:03:04.077045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:06.027 [2024-07-21 12:03:04.077057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.532 ms 00:21:06.027 [2024-07-21 12:03:04.077067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.027 [2024-07-21 12:03:04.079691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.027 [2024-07-21 12:03:04.079724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:06.027 [2024-07-21 12:03:04.079736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.589 ms 00:21:06.027 [2024-07-21 12:03:04.079743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.027 [2024-07-21 12:03:04.080005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.027 [2024-07-21 12:03:04.080019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:06.027 [2024-07-21 12:03:04.080029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:21:06.027 [2024-07-21 12:03:04.080037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.027 [2024-07-21 12:03:04.129279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.027 [2024-07-21 12:03:04.129312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:06.027 [2024-07-21 12:03:04.129325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.310 ms 00:21:06.027 [2024-07-21 12:03:04.129335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.027 [2024-07-21 12:03:04.133654] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.027 [2024-07-21 12:03:04.133687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:06.027 [2024-07-21 12:03:04.133706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.285 ms 00:21:06.027 [2024-07-21 12:03:04.133713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.027 [2024-07-21 12:03:04.136785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.027 [2024-07-21 12:03:04.136835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:06.027 [2024-07-21 12:03:04.136847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.042 ms 00:21:06.027 [2024-07-21 12:03:04.136854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.027 [2024-07-21 12:03:04.140133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.027 [2024-07-21 12:03:04.140164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:06.027 [2024-07-21 12:03:04.140176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.250 ms 00:21:06.027 [2024-07-21 12:03:04.140183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.027 [2024-07-21 12:03:04.140222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.027 [2024-07-21 12:03:04.140231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:06.027 [2024-07-21 12:03:04.140241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:06.027 [2024-07-21 12:03:04.140248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.027 [2024-07-21 12:03:04.140318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:06.027 [2024-07-21 12:03:04.140328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:06.027 [2024-07-21 12:03:04.140338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:06.027 [2024-07-21 12:03:04.140345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:06.027 [2024-07-21 12:03:04.141327] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4545.399 ms, result 0 00:21:06.027 { 00:21:06.027 "name": "ftl0", 00:21:06.027 "uuid": "d39eddfe-e90a-49b5-ad2b-4e148dc65448" 00:21:06.027 } 00:21:06.027 12:03:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:21:06.027 12:03:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:06.027 12:03:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:21:06.027 12:03:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:21:06.027 12:03:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:21:06.027 /dev/nbd0 00:21:06.027 12:03:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:21:06.028 12:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:21:06.028 12:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@865 -- # local i 00:21:06.028 12:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:21:06.028 12:03:04 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@867 -- # (( i <= 20 )) 00:21:06.028 12:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:21:06.028 12:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # break 00:21:06.028 12:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:21:06.028 12:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:21:06.028 12:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:21:06.028 1+0 records in 00:21:06.028 1+0 records out 00:21:06.028 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576919 s, 7.1 MB/s 00:21:06.028 12:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:06.028 12:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@882 -- # size=4096 00:21:06.028 12:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@883 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:21:06.028 12:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:21:06.028 12:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # return 0 00:21:06.028 12:03:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:21:06.028 [2024-07-21 12:03:04.641899] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:21:06.028 [2024-07-21 12:03:04.642525] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92399 ] 00:21:06.028 [2024-07-21 12:03:04.795902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.028 [2024-07-21 12:03:04.869812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.732  Copying: 251/1024 [MB] (251 MBps) Copying: 500/1024 [MB] (249 MBps) Copying: 746/1024 [MB] (245 MBps) Copying: 988/1024 [MB] (241 MBps) Copying: 1024/1024 [MB] (average 246 MBps) 00:21:10.732 00:21:10.732 12:03:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:21:12.635 12:03:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:21:12.635 [2024-07-21 12:03:11.380628] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
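(Annotation, not part of the captured log: the write phase being exercised here, as a sketch assembled from the commands traced above. Paths and sizes are those from the log; 262144 blocks of 4096 B is the 1 GiB payload copied onto the FTL device.)

modprobe nbd
# expose ftl0 as a kernel block device
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0
# generate 1 GiB of random data and record its checksum for later verification
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom \
    --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144
md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
# stream the file onto the FTL device through nbd with O_DIRECT
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 \
    --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 \
    --bs=4096 --count=262144 --oflag=direct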
00:21:12.635 [2024-07-21 12:03:11.380748] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92471 ] 00:21:12.893 [2024-07-21 12:03:11.540846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.893 [2024-07-21 12:03:11.611957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.379  Copying: 20/1024 [MB] (20 MBps) Copying: 40/1024 [MB] (20 MBps) Copying: 62/1024 [MB] (21 MBps) Copying: 83/1024 [MB] (21 MBps) Copying: 103/1024 [MB] (19 MBps) Copying: 124/1024 [MB] (21 MBps) Copying: 145/1024 [MB] (21 MBps) Copying: 166/1024 [MB] (20 MBps) Copying: 187/1024 [MB] (21 MBps) Copying: 209/1024 [MB] (21 MBps) Copying: 230/1024 [MB] (21 MBps) Copying: 252/1024 [MB] (21 MBps) Copying: 273/1024 [MB] (20 MBps) Copying: 294/1024 [MB] (20 MBps) Copying: 315/1024 [MB] (20 MBps) Copying: 336/1024 [MB] (21 MBps) Copying: 357/1024 [MB] (21 MBps) Copying: 378/1024 [MB] (21 MBps) Copying: 399/1024 [MB] (20 MBps) Copying: 420/1024 [MB] (21 MBps) Copying: 441/1024 [MB] (21 MBps) Copying: 463/1024 [MB] (21 MBps) Copying: 485/1024 [MB] (21 MBps) Copying: 507/1024 [MB] (21 MBps) Copying: 528/1024 [MB] (21 MBps) Copying: 550/1024 [MB] (21 MBps) Copying: 572/1024 [MB] (21 MBps) Copying: 594/1024 [MB] (21 MBps) Copying: 615/1024 [MB] (21 MBps) Copying: 637/1024 [MB] (21 MBps) Copying: 658/1024 [MB] (21 MBps) Copying: 679/1024 [MB] (21 MBps) Copying: 700/1024 [MB] (20 MBps) Copying: 722/1024 [MB] (21 MBps) Copying: 743/1024 [MB] (21 MBps) Copying: 764/1024 [MB] (21 MBps) Copying: 786/1024 [MB] (21 MBps) Copying: 807/1024 [MB] (21 MBps) Copying: 828/1024 [MB] (21 MBps) Copying: 850/1024 [MB] (21 MBps) Copying: 871/1024 [MB] (21 MBps) Copying: 893/1024 [MB] (21 MBps) Copying: 915/1024 [MB] (22 MBps) Copying: 937/1024 [MB] (22 MBps) Copying: 959/1024 [MB] (21 MBps) Copying: 980/1024 [MB] (21 MBps) Copying: 1001/1024 [MB] (21 MBps) Copying: 1022/1024 [MB] (21 MBps) Copying: 1024/1024 [MB] (average 21 MBps) 00:22:01.379 00:22:01.379 12:04:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:22:01.379 12:04:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:22:01.639 12:04:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:01.639 [2024-07-21 12:04:00.442678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.639 [2024-07-21 12:04:00.442733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:01.639 [2024-07-21 12:04:00.442749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:01.639 [2024-07-21 12:04:00.442759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.639 [2024-07-21 12:04:00.442782] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:01.639 [2024-07-21 12:04:00.444108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.639 [2024-07-21 12:04:00.444126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:01.639 [2024-07-21 12:04:00.444140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.313 ms 00:22:01.639 [2024-07-21 12:04:00.444148] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:22:01.639 [2024-07-21 12:04:00.446104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.639 [2024-07-21 12:04:00.446144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:01.639 [2024-07-21 12:04:00.446162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.929 ms 00:22:01.639 [2024-07-21 12:04:00.446171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.639 [2024-07-21 12:04:00.463420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.639 [2024-07-21 12:04:00.463457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:01.639 [2024-07-21 12:04:00.463475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.258 ms 00:22:01.639 [2024-07-21 12:04:00.463484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.639 [2024-07-21 12:04:00.468421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.639 [2024-07-21 12:04:00.468451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:01.639 [2024-07-21 12:04:00.468464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.902 ms 00:22:01.639 [2024-07-21 12:04:00.468471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.639 [2024-07-21 12:04:00.470315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.639 [2024-07-21 12:04:00.470350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:01.639 [2024-07-21 12:04:00.470364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.767 ms 00:22:01.639 [2024-07-21 12:04:00.470371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.639 [2024-07-21 12:04:00.475969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.639 [2024-07-21 12:04:00.476002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:01.639 [2024-07-21 12:04:00.476026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.573 ms 00:22:01.639 [2024-07-21 12:04:00.476034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.639 [2024-07-21 12:04:00.476152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.639 [2024-07-21 12:04:00.476162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:01.639 [2024-07-21 12:04:00.476172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:22:01.639 [2024-07-21 12:04:00.476180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.639 [2024-07-21 12:04:00.478508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.639 [2024-07-21 12:04:00.478540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:01.639 [2024-07-21 12:04:00.478552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.312 ms 00:22:01.639 [2024-07-21 12:04:00.478559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.639 [2024-07-21 12:04:00.480170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.639 [2024-07-21 12:04:00.480201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:01.639 [2024-07-21 12:04:00.480215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.578 ms 00:22:01.639 
[2024-07-21 12:04:00.480222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.639 [2024-07-21 12:04:00.481428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.639 [2024-07-21 12:04:00.481459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:01.639 [2024-07-21 12:04:00.481470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.177 ms 00:22:01.639 [2024-07-21 12:04:00.481477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.639 [2024-07-21 12:04:00.482440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.639 [2024-07-21 12:04:00.482471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:01.639 [2024-07-21 12:04:00.482483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.911 ms 00:22:01.639 [2024-07-21 12:04:00.482490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.639 [2024-07-21 12:04:00.482517] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:01.639 [2024-07-21 12:04:00.482532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:01.639 [2024-07-21 12:04:00.482562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:01.639 [2024-07-21 12:04:00.482570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:01.639 [2024-07-21 12:04:00.482581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:01.639 [2024-07-21 12:04:00.482589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:01.639 [2024-07-21 12:04:00.482601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:01.639 [2024-07-21 12:04:00.482609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:01.639 [2024-07-21 12:04:00.482618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:01.639 [2024-07-21 12:04:00.482626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:01.639 [2024-07-21 12:04:00.482636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 
[2024-07-21 12:04:00.482713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 
state: free 00:22:01.640 [2024-07-21 12:04:00.482968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.482996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 
0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:01.640 [2024-07-21 12:04:00.483522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:01.641 [2024-07-21 12:04:00.483532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:01.641 [2024-07-21 12:04:00.483547] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:01.641 [2024-07-21 12:04:00.483569] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d39eddfe-e90a-49b5-ad2b-4e148dc65448 00:22:01.641 [2024-07-21 12:04:00.483577] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:01.641 [2024-07-21 12:04:00.483588] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:01.641 [2024-07-21 12:04:00.483595] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:01.641 [2024-07-21 12:04:00.483605] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:01.641 [2024-07-21 12:04:00.483613] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:01.641 [2024-07-21 12:04:00.483623] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:01.641 [2024-07-21 12:04:00.483631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:01.641 [2024-07-21 12:04:00.483639] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:01.641 [2024-07-21 12:04:00.483645] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:01.641 [2024-07-21 12:04:00.483655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.641 [2024-07-21 12:04:00.483663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:01.641 [2024-07-21 12:04:00.483674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.142 ms 00:22:01.641 [2024-07-21 12:04:00.483684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.641 [2024-07-21 12:04:00.486607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.641 [2024-07-21 12:04:00.486625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:01.641 [2024-07-21 12:04:00.486638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.904 ms 00:22:01.641 [2024-07-21 12:04:00.486646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.641 [2024-07-21 12:04:00.486922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:01.641 [2024-07-21 12:04:00.486955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:01.641 [2024-07-21 12:04:00.486980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:22:01.641 [2024-07-21 12:04:00.486999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.641 [2024-07-21 12:04:00.497883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.641 [2024-07-21 12:04:00.497954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:01.641 [2024-07-21 12:04:00.497985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.641 [2024-07-21 12:04:00.498006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.641 [2024-07-21 12:04:00.498077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.641 [2024-07-21 12:04:00.498102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:01.641 [2024-07-21 12:04:00.498124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.641 [2024-07-21 12:04:00.498142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.641 [2024-07-21 12:04:00.498239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.641 [2024-07-21 12:04:00.498287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:01.641 [2024-07-21 12:04:00.498329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.641 [2024-07-21 12:04:00.498358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.641 [2024-07-21 12:04:00.498384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.641 [2024-07-21 12:04:00.498392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:01.641 [2024-07-21 12:04:00.498405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.641 [2024-07-21 12:04:00.498421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.901 [2024-07-21 12:04:00.522946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.901 [2024-07-21 12:04:00.522992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:01.901 [2024-07-21 12:04:00.523006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.901 [2024-07-21 12:04:00.523013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.901 [2024-07-21 12:04:00.537247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.901 [2024-07-21 12:04:00.537294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:01.901 [2024-07-21 12:04:00.537312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.901 [2024-07-21 12:04:00.537324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.901 [2024-07-21 12:04:00.537430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.901 [2024-07-21 12:04:00.537439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:01.901 [2024-07-21 12:04:00.537454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.901 [2024-07-21 12:04:00.537462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.901 [2024-07-21 12:04:00.537522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:22:01.901 [2024-07-21 12:04:00.537531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:01.901 [2024-07-21 12:04:00.537542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.901 [2024-07-21 12:04:00.537550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.901 [2024-07-21 12:04:00.537639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.901 [2024-07-21 12:04:00.537651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:01.901 [2024-07-21 12:04:00.537661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.901 [2024-07-21 12:04:00.537668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.901 [2024-07-21 12:04:00.537707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.901 [2024-07-21 12:04:00.537716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:01.901 [2024-07-21 12:04:00.537726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.901 [2024-07-21 12:04:00.537733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.901 [2024-07-21 12:04:00.537785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.901 [2024-07-21 12:04:00.537794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:01.901 [2024-07-21 12:04:00.537807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.901 [2024-07-21 12:04:00.537815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.901 [2024-07-21 12:04:00.537914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:01.901 [2024-07-21 12:04:00.537924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:01.901 [2024-07-21 12:04:00.537935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:01.901 [2024-07-21 12:04:00.537942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:01.901 [2024-07-21 12:04:00.538116] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 95.563 ms, result 0 00:22:01.901 true 00:22:01.901 12:04:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 92253 00:22:01.901 12:04:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid92253 00:22:01.901 12:04:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:22:01.901 [2024-07-21 12:04:00.659774] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
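The steps logged around here are the heart of the dirty-shutdown scenario: dirty_shutdown.sh SIGKILLs the spdk_tgt that owns ftl0 (line 83), then drives two spdk_dd passes (lines 87 and 88). A minimal sketch of that sequence, using the PID, paths, and flags recorded in this log; only $SPDK_BIN_DIR stands in for the logged build path:

kill -9 92253                              # SIGKILL the target owning ftl0, denying it a clean FTL shutdown
rm -f /dev/shm/spdk_tgt_trace.pid92253     # drop the dead target's trace file

# stage 262144 x 4 KiB (1 GiB) of random data in a plain file
"$SPDK_BIN_DIR/spdk_dd" --if=/dev/urandom \
  --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144

# replay that file onto ftl0 at a 262144-block (1 GiB) offset; spdk_dd brings the
# device back up from the saved JSON config, which is where the dirty-state recovery
# traced below (blobstore recovery, "SHM: clean 0", the Restore* actions) happens
"$SPDK_BIN_DIR/spdk_dd" --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 \
  --ob=ftl0 --count=262144 --seek=262144 \
  --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

Everything that follows is the output of those two invocations: copy progress from the first, then the recovering FTL startup, the 1 GiB copy, and the clean shutdown from the second.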
00:22:01.901 [2024-07-21 12:04:00.659890] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92982 ] 00:22:02.161 [2024-07-21 12:04:00.824337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.161 [2024-07-21 12:04:00.895190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.618  Copying: 249/1024 [MB] (249 MBps) Copying: 492/1024 [MB] (243 MBps) Copying: 740/1024 [MB] (247 MBps) Copying: 980/1024 [MB] (240 MBps) Copying: 1024/1024 [MB] (average 245 MBps) 00:22:06.618 00:22:06.618 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 92253 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:22:06.618 12:04:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:06.878 [2024-07-21 12:04:05.491108] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:22:06.878 [2024-07-21 12:04:05.491310] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93033 ] 00:22:06.878 [2024-07-21 12:04:05.654755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.878 [2024-07-21 12:04:05.700145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.138 [2024-07-21 12:04:05.800692] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:07.138 [2024-07-21 12:04:05.800859] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:07.138 [2024-07-21 12:04:05.861053] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:22:07.138 [2024-07-21 12:04:05.861422] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:22:07.138 [2024-07-21 12:04:05.861631] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:22:07.404 [2024-07-21 12:04:06.145142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.404 [2024-07-21 12:04:06.145279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:07.404 [2024-07-21 12:04:06.145322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:07.404 [2024-07-21 12:04:06.145331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.404 [2024-07-21 12:04:06.145394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.404 [2024-07-21 12:04:06.145405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:07.404 [2024-07-21 12:04:06.145413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:07.404 [2024-07-21 12:04:06.145420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.404 [2024-07-21 12:04:06.145440] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:07.404 [2024-07-21 12:04:06.145673] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:07.404 [2024-07-21 12:04:06.145691] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.404 [2024-07-21 12:04:06.145701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:07.404 [2024-07-21 12:04:06.145709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:22:07.404 [2024-07-21 12:04:06.145716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.404 [2024-07-21 12:04:06.147089] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:07.404 [2024-07-21 12:04:06.149559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.404 [2024-07-21 12:04:06.149602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:07.405 [2024-07-21 12:04:06.149613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.476 ms 00:22:07.405 [2024-07-21 12:04:06.149620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.405 [2024-07-21 12:04:06.149672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.405 [2024-07-21 12:04:06.149681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:07.405 [2024-07-21 12:04:06.149692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:07.405 [2024-07-21 12:04:06.149699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.405 [2024-07-21 12:04:06.156405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.405 [2024-07-21 12:04:06.156436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:07.405 [2024-07-21 12:04:06.156446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.675 ms 00:22:07.405 [2024-07-21 12:04:06.156453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.405 [2024-07-21 12:04:06.156534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.405 [2024-07-21 12:04:06.156546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:07.405 [2024-07-21 12:04:06.156554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:07.405 [2024-07-21 12:04:06.156566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.405 [2024-07-21 12:04:06.156624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.405 [2024-07-21 12:04:06.156634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:07.405 [2024-07-21 12:04:06.156650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:07.405 [2024-07-21 12:04:06.156664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.405 [2024-07-21 12:04:06.156687] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:07.405 [2024-07-21 12:04:06.158332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.405 [2024-07-21 12:04:06.158366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:07.405 [2024-07-21 12:04:06.158379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.654 ms 00:22:07.405 [2024-07-21 12:04:06.158386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.405 [2024-07-21 12:04:06.158423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.405 [2024-07-21 12:04:06.158438] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:07.405 [2024-07-21 12:04:06.158447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:07.405 [2024-07-21 12:04:06.158461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.405 [2024-07-21 12:04:06.158489] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:07.405 [2024-07-21 12:04:06.158509] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:07.405 [2024-07-21 12:04:06.158544] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:07.405 [2024-07-21 12:04:06.158569] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:07.405 [2024-07-21 12:04:06.158659] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:07.405 [2024-07-21 12:04:06.158669] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:07.405 [2024-07-21 12:04:06.158678] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:07.405 [2024-07-21 12:04:06.158687] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:07.405 [2024-07-21 12:04:06.158696] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:07.405 [2024-07-21 12:04:06.158703] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:07.405 [2024-07-21 12:04:06.158710] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:07.405 [2024-07-21 12:04:06.158716] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:07.405 [2024-07-21 12:04:06.158732] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:07.405 [2024-07-21 12:04:06.158739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.405 [2024-07-21 12:04:06.158746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:07.405 [2024-07-21 12:04:06.158754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:22:07.405 [2024-07-21 12:04:06.158769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.405 [2024-07-21 12:04:06.158851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.405 [2024-07-21 12:04:06.158860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:07.405 [2024-07-21 12:04:06.158869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:22:07.405 [2024-07-21 12:04:06.158875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.405 [2024-07-21 12:04:06.158957] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:07.405 [2024-07-21 12:04:06.158967] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:07.405 [2024-07-21 12:04:06.158975] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:07.405 [2024-07-21 12:04:06.158982] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.405 [2024-07-21 12:04:06.158997] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 
00:22:07.405 [2024-07-21 12:04:06.159004] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:07.405 [2024-07-21 12:04:06.159028] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:07.405 [2024-07-21 12:04:06.159036] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:07.405 [2024-07-21 12:04:06.159053] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:07.405 [2024-07-21 12:04:06.159060] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:07.405 [2024-07-21 12:04:06.159067] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:07.405 [2024-07-21 12:04:06.159079] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:07.405 [2024-07-21 12:04:06.159090] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:07.405 [2024-07-21 12:04:06.159098] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:07.405 [2024-07-21 12:04:06.159104] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:07.405 [2024-07-21 12:04:06.159111] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.405 [2024-07-21 12:04:06.159117] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:07.405 [2024-07-21 12:04:06.159124] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:07.405 [2024-07-21 12:04:06.159131] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.405 [2024-07-21 12:04:06.159138] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:07.405 [2024-07-21 12:04:06.159150] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:07.405 [2024-07-21 12:04:06.159157] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:07.405 [2024-07-21 12:04:06.159164] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:07.405 [2024-07-21 12:04:06.159170] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:07.405 [2024-07-21 12:04:06.159177] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:07.405 [2024-07-21 12:04:06.159183] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:07.405 [2024-07-21 12:04:06.159190] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:07.405 [2024-07-21 12:04:06.159196] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:07.405 [2024-07-21 12:04:06.159209] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:07.405 [2024-07-21 12:04:06.159219] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:07.405 [2024-07-21 12:04:06.159233] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:07.405 [2024-07-21 12:04:06.159239] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:07.405 [2024-07-21 12:04:06.159246] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:07.405 [2024-07-21 12:04:06.159252] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:07.405 [2024-07-21 12:04:06.159260] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:07.405 [2024-07-21 12:04:06.159266] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:07.405 [2024-07-21 12:04:06.159285] ftl_layout.c: 121:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:07.405 [2024-07-21 12:04:06.159291] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:07.405 [2024-07-21 12:04:06.159297] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:07.405 [2024-07-21 12:04:06.159303] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.405 [2024-07-21 12:04:06.159309] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:07.405 [2024-07-21 12:04:06.159315] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:07.405 [2024-07-21 12:04:06.159321] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.405 [2024-07-21 12:04:06.159329] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:07.405 [2024-07-21 12:04:06.159355] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:07.405 [2024-07-21 12:04:06.159362] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:07.405 [2024-07-21 12:04:06.159369] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.405 [2024-07-21 12:04:06.159377] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:07.405 [2024-07-21 12:04:06.159384] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:07.405 [2024-07-21 12:04:06.159391] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:07.405 [2024-07-21 12:04:06.159398] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:07.405 [2024-07-21 12:04:06.159404] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:07.406 [2024-07-21 12:04:06.159411] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:07.406 [2024-07-21 12:04:06.159418] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:07.406 [2024-07-21 12:04:06.159428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:07.406 [2024-07-21 12:04:06.159436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:07.406 [2024-07-21 12:04:06.159443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:07.406 [2024-07-21 12:04:06.159450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:07.406 [2024-07-21 12:04:06.159458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:07.406 [2024-07-21 12:04:06.159466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:07.406 [2024-07-21 12:04:06.159475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:07.406 [2024-07-21 12:04:06.159482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:07.406 [2024-07-21 12:04:06.159490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 
blk_sz:0x40 00:22:07.406 [2024-07-21 12:04:06.159497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:07.406 [2024-07-21 12:04:06.159504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:07.406 [2024-07-21 12:04:06.159511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:07.406 [2024-07-21 12:04:06.159518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:07.406 [2024-07-21 12:04:06.159524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:07.406 [2024-07-21 12:04:06.159532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:07.406 [2024-07-21 12:04:06.159538] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:07.406 [2024-07-21 12:04:06.159548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:07.406 [2024-07-21 12:04:06.159556] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:07.406 [2024-07-21 12:04:06.159564] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:07.406 [2024-07-21 12:04:06.159570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:07.406 [2024-07-21 12:04:06.159577] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:07.406 [2024-07-21 12:04:06.159587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.406 [2024-07-21 12:04:06.159597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:07.406 [2024-07-21 12:04:06.159605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.681 ms 00:22:07.406 [2024-07-21 12:04:06.159612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.406 [2024-07-21 12:04:06.182313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.406 [2024-07-21 12:04:06.182355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:07.406 [2024-07-21 12:04:06.182370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.700 ms 00:22:07.406 [2024-07-21 12:04:06.182382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.406 [2024-07-21 12:04:06.182478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.406 [2024-07-21 12:04:06.182489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:07.406 [2024-07-21 12:04:06.182499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:22:07.406 [2024-07-21 12:04:06.182527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.406 [2024-07-21 12:04:06.192517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:22:07.406 [2024-07-21 12:04:06.192557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:07.406 [2024-07-21 12:04:06.192569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.946 ms 00:22:07.406 [2024-07-21 12:04:06.192578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.406 [2024-07-21 12:04:06.192616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.406 [2024-07-21 12:04:06.192626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:07.406 [2024-07-21 12:04:06.192635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:07.406 [2024-07-21 12:04:06.192643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.406 [2024-07-21 12:04:06.193118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.406 [2024-07-21 12:04:06.193131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:07.406 [2024-07-21 12:04:06.193140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:22:07.406 [2024-07-21 12:04:06.193148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.406 [2024-07-21 12:04:06.193269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.406 [2024-07-21 12:04:06.193297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:07.406 [2024-07-21 12:04:06.193315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:22:07.406 [2024-07-21 12:04:06.193323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.406 [2024-07-21 12:04:06.199302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.406 [2024-07-21 12:04:06.199336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:07.406 [2024-07-21 12:04:06.199346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.967 ms 00:22:07.406 [2024-07-21 12:04:06.199353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.406 [2024-07-21 12:04:06.201965] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:07.406 [2024-07-21 12:04:06.201996] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:07.406 [2024-07-21 12:04:06.202008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.406 [2024-07-21 12:04:06.202027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:07.406 [2024-07-21 12:04:06.202036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.562 ms 00:22:07.406 [2024-07-21 12:04:06.202043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.406 [2024-07-21 12:04:06.215803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.406 [2024-07-21 12:04:06.215847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:07.406 [2024-07-21 12:04:06.215880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.747 ms 00:22:07.406 [2024-07-21 12:04:06.215888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.406 [2024-07-21 12:04:06.217778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.406 [2024-07-21 12:04:06.217814] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:07.406 [2024-07-21 12:04:06.217835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.854 ms 00:22:07.406 [2024-07-21 12:04:06.217842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.406 [2024-07-21 12:04:06.219343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.406 [2024-07-21 12:04:06.219373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:07.406 [2024-07-21 12:04:06.219383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.468 ms 00:22:07.406 [2024-07-21 12:04:06.219390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.406 [2024-07-21 12:04:06.219673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.406 [2024-07-21 12:04:06.219686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:07.406 [2024-07-21 12:04:06.219694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:22:07.406 [2024-07-21 12:04:06.219701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.406 [2024-07-21 12:04:06.242376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.406 [2024-07-21 12:04:06.242441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:07.406 [2024-07-21 12:04:06.242467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.701 ms 00:22:07.406 [2024-07-21 12:04:06.242475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.406 [2024-07-21 12:04:06.248596] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:07.406 [2024-07-21 12:04:06.251607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.406 [2024-07-21 12:04:06.251635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:07.406 [2024-07-21 12:04:06.251645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.097 ms 00:22:07.406 [2024-07-21 12:04:06.251653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.406 [2024-07-21 12:04:06.251721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.406 [2024-07-21 12:04:06.251731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:07.406 [2024-07-21 12:04:06.251743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:07.406 [2024-07-21 12:04:06.251766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.406 [2024-07-21 12:04:06.251901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.406 [2024-07-21 12:04:06.251938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:07.406 [2024-07-21 12:04:06.251959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:22:07.406 [2024-07-21 12:04:06.251976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.406 [2024-07-21 12:04:06.252026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.406 [2024-07-21 12:04:06.252063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:07.406 [2024-07-21 12:04:06.252083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:07.406 [2024-07-21 12:04:06.252112] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.407 [2024-07-21 12:04:06.252156] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:07.407 [2024-07-21 12:04:06.252232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.407 [2024-07-21 12:04:06.252241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:07.407 [2024-07-21 12:04:06.252253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:22:07.407 [2024-07-21 12:04:06.252268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.407 [2024-07-21 12:04:06.255785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.407 [2024-07-21 12:04:06.255842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:07.407 [2024-07-21 12:04:06.255870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.494 ms 00:22:07.407 [2024-07-21 12:04:06.255878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.407 [2024-07-21 12:04:06.255947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.407 [2024-07-21 12:04:06.255956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:07.407 [2024-07-21 12:04:06.255964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:22:07.407 [2024-07-21 12:04:06.255971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.407 [2024-07-21 12:04:06.257014] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 111.663 ms, result 0 00:22:43.983  Copying: 28/1024 [MB] (28 MBps) Copying: 57/1024 [MB] (28 MBps) Copying: 86/1024 [MB] (28 MBps) Copying: 114/1024 [MB] (28 MBps) Copying: 143/1024 [MB] (29 MBps) Copying: 172/1024 [MB] (28 MBps) Copying: 201/1024 [MB] (28 MBps) Copying: 230/1024 [MB] (28 MBps) Copying: 258/1024 [MB] (28 MBps) Copying: 287/1024 [MB] (28 MBps) Copying: 316/1024 [MB] (28 MBps) Copying: 345/1024 [MB] (28 MBps) Copying: 373/1024 [MB] (28 MBps) Copying: 402/1024 [MB] (28 MBps) Copying: 431/1024 [MB] (28 MBps) Copying: 459/1024 [MB] (28 MBps) Copying: 487/1024 [MB] (28 MBps) Copying: 515/1024 [MB] (28 MBps) Copying: 544/1024 [MB] (28 MBps) Copying: 572/1024 [MB] (28 MBps) Copying: 600/1024 [MB] (28 MBps) Copying: 629/1024 [MB] (28 MBps) Copying: 658/1024 [MB] (28 MBps) Copying: 686/1024 [MB] (28 MBps) Copying: 715/1024 [MB] (28 MBps) Copying: 744/1024 [MB] (28 MBps) Copying: 772/1024 [MB] (28 MBps) Copying: 801/1024 [MB] (28 MBps) Copying: 829/1024 [MB] (28 MBps) Copying: 858/1024 [MB] (28 MBps) Copying: 886/1024 [MB] (28 MBps) Copying: 915/1024 [MB] (28 MBps) Copying: 944/1024 [MB] (28 MBps) Copying: 972/1024 [MB] (28 MBps) Copying: 1000/1024 [MB] (28 MBps) Copying: 1023/1024 [MB] (22 MBps) Copying: 1024/1024 [MB] (average 28 MBps)[2024-07-21 12:04:42.754138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.983 [2024-07-21 12:04:42.754210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:43.983 [2024-07-21 12:04:42.754225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:43.983 [2024-07-21 12:04:42.754237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.983 [2024-07-21 12:04:42.756441] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
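Two consistency checks fall out of the startup trace above, using only figures the log itself reports. The superblock geometry (L2P entries: 20971520, L2P address size: 4) accounts exactly for the l2p region in the NV cache layout dump, and the copy throughput matches the wall-clock window between "FTL startup" finishing (12:04:06.257) and the first deinit action (12:04:42.754), roughly 36.5 s:

\[ 20971520 \times 4\ \mathrm{B} = 83886080\ \mathrm{B} = 80\ \mathrm{MiB} \qquad 1024\ \mathrm{MB} \,/\, 36.5\ \mathrm{s} \approx 28\ \mathrm{MB/s} \]

in agreement with "blocks: 80.00 MiB" for Region l2p and the "average 28 MBps" figure in the copy progress.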
00:22:43.983 [2024-07-21 12:04:42.758337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.983 [2024-07-21 12:04:42.758404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:43.983 [2024-07-21 12:04:42.758432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.822 ms 00:22:43.983 [2024-07-21 12:04:42.758452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.983 [2024-07-21 12:04:42.767135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.983 [2024-07-21 12:04:42.767205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:43.983 [2024-07-21 12:04:42.767256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.889 ms 00:22:43.983 [2024-07-21 12:04:42.767276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.983 [2024-07-21 12:04:42.788750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.983 [2024-07-21 12:04:42.788830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:43.983 [2024-07-21 12:04:42.788876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.481 ms 00:22:43.983 [2024-07-21 12:04:42.788897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.983 [2024-07-21 12:04:42.794240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.983 [2024-07-21 12:04:42.794303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:43.983 [2024-07-21 12:04:42.794347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.310 ms 00:22:43.983 [2024-07-21 12:04:42.794367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.983 [2024-07-21 12:04:42.795989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.983 [2024-07-21 12:04:42.796056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:43.983 [2024-07-21 12:04:42.796083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.576 ms 00:22:43.983 [2024-07-21 12:04:42.796103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:43.983 [2024-07-21 12:04:42.800911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:43.983 [2024-07-21 12:04:42.800976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:43.983 [2024-07-21 12:04:42.801020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.777 ms 00:22:43.983 [2024-07-21 12:04:42.801040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.244 [2024-07-21 12:04:42.908005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.244 [2024-07-21 12:04:42.908138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:44.244 [2024-07-21 12:04:42.908171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.123 ms 00:22:44.244 [2024-07-21 12:04:42.908204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.244 [2024-07-21 12:04:42.910947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.244 [2024-07-21 12:04:42.911019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:44.244 [2024-07-21 12:04:42.911047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.709 ms 00:22:44.244 [2024-07-21 12:04:42.911066] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.244 [2024-07-21 12:04:42.912774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.244 [2024-07-21 12:04:42.912856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:44.244 [2024-07-21 12:04:42.912900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.667 ms 00:22:44.244 [2024-07-21 12:04:42.912937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.244 [2024-07-21 12:04:42.914233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.244 [2024-07-21 12:04:42.914298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:44.244 [2024-07-21 12:04:42.914331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.253 ms 00:22:44.244 [2024-07-21 12:04:42.914351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.244 [2024-07-21 12:04:42.915535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.244 [2024-07-21 12:04:42.915602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:44.244 [2024-07-21 12:04:42.915634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.111 ms 00:22:44.244 [2024-07-21 12:04:42.915654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.244 [2024-07-21 12:04:42.915697] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:44.244 [2024-07-21 12:04:42.915748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 108032 / 261120 wr_cnt: 1 state: open 00:22:44.244 [2024-07-21 12:04:42.915807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.915861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.915900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.915936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.915973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.916021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.916057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.916095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.916135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.916186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.916222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.916265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.916304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 
[2024-07-21 12:04:42.916348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.916406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.916447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.916494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.916536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.916574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.916614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.916651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.916681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.916709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.916738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.916783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.916825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.916835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.916843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.916851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:44.244 [2024-07-21 12:04:42.916859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.916866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.916874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.916881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.916889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.916896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.916903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.916912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.916919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 
state: free 00:22:44.245 [2024-07-21 12:04:42.916927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.916936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.916944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.916953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.916960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.916968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.916976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.916984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.916992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 
0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:44.245 [2024-07-21 12:04:42.917408] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:44.245 [2024-07-21 12:04:42.917417] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d39eddfe-e90a-49b5-ad2b-4e148dc65448 00:22:44.245 [2024-07-21 12:04:42.917425] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 108032 00:22:44.245 [2024-07-21 12:04:42.917432] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 108992 00:22:44.245 [2024-07-21 12:04:42.917440] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 108032 00:22:44.245 [2024-07-21 12:04:42.917453] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0089 00:22:44.245 [2024-07-21 12:04:42.917460] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:44.245 [2024-07-21 12:04:42.917468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:44.245 [2024-07-21 12:04:42.917475] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:44.245 [2024-07-21 12:04:42.917482] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:44.245 [2024-07-21 12:04:42.917488] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:44.245 [2024-07-21 12:04:42.917496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.245 [2024-07-21 12:04:42.917504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:44.245 [2024-07-21 12:04:42.917515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.804 ms 00:22:44.245 [2024-07-21 12:04:42.917523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.245 [2024-07-21 12:04:42.919262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.245 [2024-07-21 12:04:42.919277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Deinitialize L2P 00:22:44.245 [2024-07-21 12:04:42.919286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.723 ms 00:22:44.245 [2024-07-21 12:04:42.919305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.245 [2024-07-21 12:04:42.919435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:44.245 [2024-07-21 12:04:42.919447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:44.245 [2024-07-21 12:04:42.919455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:22:44.245 [2024-07-21 12:04:42.919470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.245 [2024-07-21 12:04:42.924950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.245 [2024-07-21 12:04:42.925007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:44.245 [2024-07-21 12:04:42.925033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.245 [2024-07-21 12:04:42.925052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.245 [2024-07-21 12:04:42.925122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.245 [2024-07-21 12:04:42.925164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:44.245 [2024-07-21 12:04:42.925183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.245 [2024-07-21 12:04:42.925202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.245 [2024-07-21 12:04:42.925258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.245 [2024-07-21 12:04:42.925285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:44.245 [2024-07-21 12:04:42.925321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.245 [2024-07-21 12:04:42.925340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.245 [2024-07-21 12:04:42.925383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.245 [2024-07-21 12:04:42.925415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:44.245 [2024-07-21 12:04:42.925443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.245 [2024-07-21 12:04:42.925493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.245 [2024-07-21 12:04:42.938828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.245 [2024-07-21 12:04:42.938947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:44.245 [2024-07-21 12:04:42.938991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.245 [2024-07-21 12:04:42.939012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.245 [2024-07-21 12:04:42.947171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.245 [2024-07-21 12:04:42.947254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:44.245 [2024-07-21 12:04:42.947282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.245 [2024-07-21 12:04:42.947317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.245 [2024-07-21 12:04:42.947381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.245 [2024-07-21 
12:04:42.947414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:44.245 [2024-07-21 12:04:42.947435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.246 [2024-07-21 12:04:42.947459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.246 [2024-07-21 12:04:42.947493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.246 [2024-07-21 12:04:42.947562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:44.246 [2024-07-21 12:04:42.947594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.246 [2024-07-21 12:04:42.947620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.246 [2024-07-21 12:04:42.947724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.246 [2024-07-21 12:04:42.947762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:44.246 [2024-07-21 12:04:42.947789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.246 [2024-07-21 12:04:42.947809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.246 [2024-07-21 12:04:42.947932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.246 [2024-07-21 12:04:42.947977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:44.246 [2024-07-21 12:04:42.948007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.246 [2024-07-21 12:04:42.948025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.246 [2024-07-21 12:04:42.948111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.246 [2024-07-21 12:04:42.948140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:44.246 [2024-07-21 12:04:42.948167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.246 [2024-07-21 12:04:42.948186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.246 [2024-07-21 12:04:42.948238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:44.246 [2024-07-21 12:04:42.948266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:44.246 [2024-07-21 12:04:42.948295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:44.246 [2024-07-21 12:04:42.948321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:44.246 [2024-07-21 12:04:42.948444] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 196.480 ms, result 0 00:22:45.622 00:22:45.622 00:22:45.622 12:04:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:22:47.523 12:04:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:47.523 [2024-07-21 12:04:46.044738] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:22:47.523 [2024-07-21 12:04:46.044865] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93448 ] 00:22:47.523 [2024-07-21 12:04:46.204238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.523 [2024-07-21 12:04:46.251225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.523 [2024-07-21 12:04:46.352225] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:47.523 [2024-07-21 12:04:46.352313] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:47.783 [2024-07-21 12:04:46.499443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.783 [2024-07-21 12:04:46.499502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:47.783 [2024-07-21 12:04:46.499515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:47.783 [2024-07-21 12:04:46.499522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.783 [2024-07-21 12:04:46.499584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.783 [2024-07-21 12:04:46.499596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:47.783 [2024-07-21 12:04:46.499603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:22:47.783 [2024-07-21 12:04:46.499614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.783 [2024-07-21 12:04:46.499632] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:47.783 [2024-07-21 12:04:46.499869] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:47.783 [2024-07-21 12:04:46.499888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.783 [2024-07-21 12:04:46.499898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:47.783 [2024-07-21 12:04:46.499907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:22:47.783 [2024-07-21 12:04:46.499914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.783 [2024-07-21 12:04:46.501268] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:47.783 [2024-07-21 12:04:46.503842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.783 [2024-07-21 12:04:46.503876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:47.783 [2024-07-21 12:04:46.503891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.581 ms 00:22:47.783 [2024-07-21 12:04:46.503898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.783 [2024-07-21 12:04:46.503959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.783 [2024-07-21 12:04:46.503968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:47.783 [2024-07-21 12:04:46.503977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:47.783 [2024-07-21 12:04:46.503984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.783 [2024-07-21 12:04:46.510627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.783 [2024-07-21 
12:04:46.510655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:47.783 [2024-07-21 12:04:46.510663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.607 ms 00:22:47.783 [2024-07-21 12:04:46.510670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.783 [2024-07-21 12:04:46.510770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.783 [2024-07-21 12:04:46.510781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:47.783 [2024-07-21 12:04:46.510789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:22:47.783 [2024-07-21 12:04:46.510795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.783 [2024-07-21 12:04:46.510871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.783 [2024-07-21 12:04:46.510885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:47.783 [2024-07-21 12:04:46.510895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:47.783 [2024-07-21 12:04:46.510902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.783 [2024-07-21 12:04:46.510934] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:47.783 [2024-07-21 12:04:46.512552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.783 [2024-07-21 12:04:46.512581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:47.783 [2024-07-21 12:04:46.512600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.629 ms 00:22:47.783 [2024-07-21 12:04:46.512615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.783 [2024-07-21 12:04:46.512646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.783 [2024-07-21 12:04:46.512654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:47.783 [2024-07-21 12:04:46.512664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:47.783 [2024-07-21 12:04:46.512684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.783 [2024-07-21 12:04:46.512709] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:47.783 [2024-07-21 12:04:46.512729] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:47.783 [2024-07-21 12:04:46.512759] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:47.783 [2024-07-21 12:04:46.512772] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:22:47.783 [2024-07-21 12:04:46.512859] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:47.783 [2024-07-21 12:04:46.512874] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:47.783 [2024-07-21 12:04:46.512883] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:22:47.783 [2024-07-21 12:04:46.512892] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:47.783 [2024-07-21 12:04:46.512902] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:47.783 [2024-07-21 12:04:46.512910] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:47.783 [2024-07-21 12:04:46.512916] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:47.783 [2024-07-21 12:04:46.512923] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:47.783 [2024-07-21 12:04:46.512930] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:47.783 [2024-07-21 12:04:46.512938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.783 [2024-07-21 12:04:46.512945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:47.783 [2024-07-21 12:04:46.512952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:22:47.783 [2024-07-21 12:04:46.512960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.783 [2024-07-21 12:04:46.513032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.783 [2024-07-21 12:04:46.513040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:47.783 [2024-07-21 12:04:46.513048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:22:47.783 [2024-07-21 12:04:46.513054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.784 [2024-07-21 12:04:46.513136] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:47.784 [2024-07-21 12:04:46.513152] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:47.784 [2024-07-21 12:04:46.513159] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:47.784 [2024-07-21 12:04:46.513187] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.784 [2024-07-21 12:04:46.513198] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:47.784 [2024-07-21 12:04:46.513204] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:47.784 [2024-07-21 12:04:46.513210] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:47.784 [2024-07-21 12:04:46.513217] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:47.784 [2024-07-21 12:04:46.513224] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:47.784 [2024-07-21 12:04:46.513230] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:47.784 [2024-07-21 12:04:46.513236] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:47.784 [2024-07-21 12:04:46.513244] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:47.784 [2024-07-21 12:04:46.513259] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:47.784 [2024-07-21 12:04:46.513266] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:47.784 [2024-07-21 12:04:46.513272] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:47.784 [2024-07-21 12:04:46.513279] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.784 [2024-07-21 12:04:46.513285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:47.784 [2024-07-21 12:04:46.513291] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:47.784 [2024-07-21 12:04:46.513298] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:22:47.784 [2024-07-21 12:04:46.513307] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:47.784 [2024-07-21 12:04:46.513313] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:47.784 [2024-07-21 12:04:46.513319] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:47.784 [2024-07-21 12:04:46.513325] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:47.784 [2024-07-21 12:04:46.513331] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:47.784 [2024-07-21 12:04:46.513337] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:47.784 [2024-07-21 12:04:46.513343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:47.784 [2024-07-21 12:04:46.513350] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:47.784 [2024-07-21 12:04:46.513355] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:47.784 [2024-07-21 12:04:46.513361] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:47.784 [2024-07-21 12:04:46.513367] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:47.784 [2024-07-21 12:04:46.513373] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:47.784 [2024-07-21 12:04:46.513379] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:47.784 [2024-07-21 12:04:46.513385] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:47.784 [2024-07-21 12:04:46.513391] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:47.784 [2024-07-21 12:04:46.513397] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:47.784 [2024-07-21 12:04:46.513405] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:47.784 [2024-07-21 12:04:46.513411] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:47.784 [2024-07-21 12:04:46.513417] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:47.784 [2024-07-21 12:04:46.513423] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:47.784 [2024-07-21 12:04:46.513429] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.784 [2024-07-21 12:04:46.513435] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:47.784 [2024-07-21 12:04:46.513441] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:47.784 [2024-07-21 12:04:46.513447] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.784 [2024-07-21 12:04:46.513454] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:47.784 [2024-07-21 12:04:46.513461] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:47.784 [2024-07-21 12:04:46.513468] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:47.784 [2024-07-21 12:04:46.513475] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.784 [2024-07-21 12:04:46.513482] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:47.784 [2024-07-21 12:04:46.513488] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:47.784 [2024-07-21 12:04:46.513494] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:47.784 [2024-07-21 12:04:46.513501] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:47.784 [2024-07-21 12:04:46.513509] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:47.784 [2024-07-21 12:04:46.513516] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:47.784 [2024-07-21 12:04:46.513523] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:47.784 [2024-07-21 12:04:46.513532] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:47.784 [2024-07-21 12:04:46.513540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:47.784 [2024-07-21 12:04:46.513547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:47.784 [2024-07-21 12:04:46.513554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:47.784 [2024-07-21 12:04:46.513560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:47.784 [2024-07-21 12:04:46.513567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:47.784 [2024-07-21 12:04:46.513573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:47.784 [2024-07-21 12:04:46.513580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:47.784 [2024-07-21 12:04:46.513586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:47.784 [2024-07-21 12:04:46.513592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:47.784 [2024-07-21 12:04:46.513599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:47.784 [2024-07-21 12:04:46.513605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:47.784 [2024-07-21 12:04:46.513612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:47.784 [2024-07-21 12:04:46.513621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:47.784 [2024-07-21 12:04:46.513628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:47.784 [2024-07-21 12:04:46.513635] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:47.784 [2024-07-21 12:04:46.513642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:47.784 [2024-07-21 12:04:46.513650] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:22:47.784 [2024-07-21 12:04:46.513658] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:47.784 [2024-07-21 12:04:46.513664] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:47.784 [2024-07-21 12:04:46.513671] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:47.784 [2024-07-21 12:04:46.513678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.784 [2024-07-21 12:04:46.513685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:47.784 [2024-07-21 12:04:46.513694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:22:47.784 [2024-07-21 12:04:46.513701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.784 [2024-07-21 12:04:46.536295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.784 [2024-07-21 12:04:46.536340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:47.784 [2024-07-21 12:04:46.536356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.586 ms 00:22:47.784 [2024-07-21 12:04:46.536365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.784 [2024-07-21 12:04:46.536468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.784 [2024-07-21 12:04:46.536479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:47.784 [2024-07-21 12:04:46.536489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:22:47.784 [2024-07-21 12:04:46.536498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.784 [2024-07-21 12:04:46.546714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.784 [2024-07-21 12:04:46.546762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:47.784 [2024-07-21 12:04:46.546777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.161 ms 00:22:47.784 [2024-07-21 12:04:46.546786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.784 [2024-07-21 12:04:46.546848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.784 [2024-07-21 12:04:46.546861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:47.784 [2024-07-21 12:04:46.546871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:47.784 [2024-07-21 12:04:46.546885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.784 [2024-07-21 12:04:46.547387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.784 [2024-07-21 12:04:46.547407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:47.784 [2024-07-21 12:04:46.547418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:22:47.784 [2024-07-21 12:04:46.547438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.784 [2024-07-21 12:04:46.547572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.784 [2024-07-21 12:04:46.547585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:47.784 [2024-07-21 12:04:46.547595] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:22:47.784 [2024-07-21 12:04:46.547608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.784 [2024-07-21 12:04:46.553590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.784 [2024-07-21 12:04:46.553621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:47.784 [2024-07-21 12:04:46.553631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.967 ms 00:22:47.784 [2024-07-21 12:04:46.553638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.785 [2024-07-21 12:04:46.556327] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:47.785 [2024-07-21 12:04:46.556362] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:47.785 [2024-07-21 12:04:46.556378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.785 [2024-07-21 12:04:46.556386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:47.785 [2024-07-21 12:04:46.556394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.656 ms 00:22:47.785 [2024-07-21 12:04:46.556402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.785 [2024-07-21 12:04:46.569934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.785 [2024-07-21 12:04:46.569971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:47.785 [2024-07-21 12:04:46.569982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.504 ms 00:22:47.785 [2024-07-21 12:04:46.569990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.785 [2024-07-21 12:04:46.571945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.785 [2024-07-21 12:04:46.571977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:47.785 [2024-07-21 12:04:46.571987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.933 ms 00:22:47.785 [2024-07-21 12:04:46.571999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.785 [2024-07-21 12:04:46.573673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.785 [2024-07-21 12:04:46.573705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:47.785 [2024-07-21 12:04:46.573715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.641 ms 00:22:47.785 [2024-07-21 12:04:46.573722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.785 [2024-07-21 12:04:46.574027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.785 [2024-07-21 12:04:46.574048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:47.785 [2024-07-21 12:04:46.574056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:22:47.785 [2024-07-21 12:04:46.574067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.785 [2024-07-21 12:04:46.597313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.785 [2024-07-21 12:04:46.597375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:47.785 [2024-07-21 12:04:46.597407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.257 ms 00:22:47.785 
[2024-07-21 12:04:46.597415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.785 [2024-07-21 12:04:46.603625] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:47.785 [2024-07-21 12:04:46.606649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.785 [2024-07-21 12:04:46.606677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:47.785 [2024-07-21 12:04:46.606688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.200 ms 00:22:47.785 [2024-07-21 12:04:46.606705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.785 [2024-07-21 12:04:46.606797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.785 [2024-07-21 12:04:46.606807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:47.785 [2024-07-21 12:04:46.606815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:47.785 [2024-07-21 12:04:46.606831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.785 [2024-07-21 12:04:46.608473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.785 [2024-07-21 12:04:46.608514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:47.785 [2024-07-21 12:04:46.608538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.620 ms 00:22:47.785 [2024-07-21 12:04:46.608547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.785 [2024-07-21 12:04:46.608576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.785 [2024-07-21 12:04:46.608585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:47.785 [2024-07-21 12:04:46.608597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:47.785 [2024-07-21 12:04:46.608605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.785 [2024-07-21 12:04:46.608657] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:47.785 [2024-07-21 12:04:46.608668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.785 [2024-07-21 12:04:46.608676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:47.785 [2024-07-21 12:04:46.608687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:47.785 [2024-07-21 12:04:46.608695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.785 [2024-07-21 12:04:46.612448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.785 [2024-07-21 12:04:46.612493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:47.785 [2024-07-21 12:04:46.612528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.743 ms 00:22:47.785 [2024-07-21 12:04:46.612536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.785 [2024-07-21 12:04:46.612612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.785 [2024-07-21 12:04:46.612623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:47.785 [2024-07-21 12:04:46.612639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:22:47.785 [2024-07-21 12:04:46.612656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.785 [2024-07-21 12:04:46.617723] 
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 117.536 ms, result 0 00:23:18.387  Copying: 1372/1048576 [kB] (1372 kBps) Copying: 11496/1048576 [kB] (10124 kBps) Copying: 47/1024 [MB] (36 MBps) Copying: 83/1024 [MB] (36 MBps) Copying: 120/1024 [MB] (36 MBps) Copying: 156/1024 [MB] (36 MBps) Copying: 194/1024 [MB] (37 MBps) Copying: 230/1024 [MB] (36 MBps) Copying: 266/1024 [MB] (35 MBps) Copying: 302/1024 [MB] (35 MBps) Copying: 339/1024 [MB] (36 MBps) Copying: 376/1024 [MB] (37 MBps) Copying: 412/1024 [MB] (36 MBps) Copying: 448/1024 [MB] (35 MBps) Copying: 485/1024 [MB] (36 MBps) Copying: 521/1024 [MB] (36 MBps) Copying: 557/1024 [MB] (36 MBps) Copying: 593/1024 [MB] (35 MBps) Copying: 630/1024 [MB] (36 MBps) Copying: 666/1024 [MB] (36 MBps) Copying: 703/1024 [MB] (36 MBps) Copying: 739/1024 [MB] (36 MBps) Copying: 775/1024 [MB] (36 MBps) Copying: 811/1024 [MB] (35 MBps) Copying: 848/1024 [MB] (36 MBps) Copying: 884/1024 [MB] (36 MBps) Copying: 920/1024 [MB] (35 MBps) Copying: 957/1024 [MB] (36 MBps) Copying: 993/1024 [MB] (36 MBps) Copying: 1024/1024 [MB] (average 34 MBps)[2024-07-21 12:05:17.094082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.387 [2024-07-21 12:05:17.094151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:18.387 [2024-07-21 12:05:17.094170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:18.387 [2024-07-21 12:05:17.094181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.387 [2024-07-21 12:05:17.094222] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:18.387 [2024-07-21 12:05:17.094944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.387 [2024-07-21 12:05:17.094963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:18.387 [2024-07-21 12:05:17.094974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.706 ms 00:23:18.387 [2024-07-21 12:05:17.094983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.387 [2024-07-21 12:05:17.095303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.387 [2024-07-21 12:05:17.095321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:18.387 [2024-07-21 12:05:17.095332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:23:18.387 [2024-07-21 12:05:17.095340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.387 [2024-07-21 12:05:17.107215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.387 [2024-07-21 12:05:17.107273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:18.387 [2024-07-21 12:05:17.107303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.878 ms 00:23:18.387 [2024-07-21 12:05:17.107311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.387 [2024-07-21 12:05:17.113189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.387 [2024-07-21 12:05:17.113226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:18.387 [2024-07-21 12:05:17.113236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.853 ms 00:23:18.387 [2024-07-21 12:05:17.113243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.387 
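(A quick cross-check of the spdk_dd transfer above, read entirely off this log: --count=262144 blocks and the final "Copying: 1024/1024 [MB]" progress line together imply a 4 KiB logical block, and the wall-clock stamps bracket the copy at roughly half a minute, consistent with the reported 34 MBps average. A minimal sketch of that arithmetic; the block size and elapsed time are inferred from the log, not configured anywhere:)

# Sketch: sanity-check the spdk_dd numbers reported in this log.
count = 262144                         # --count passed to spdk_dd (blocks)
total_mib = 1024                       # final "Copying: 1024/1024 [MB]" line
block_bytes = total_mib * 1024 * 1024 // count
assert block_bytes == 4096             # inferred: a 4 KiB logical block
elapsed_s = 31                         # approx. 12:04:46 to 12:05:17 wall clock
print(total_mib / elapsed_s)           # ~33 MB/s vs. the 34 MBps average shown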
[2024-07-21 12:05:17.114750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.387 [2024-07-21 12:05:17.114788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:18.387 [2024-07-21 12:05:17.114797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.462 ms 00:23:18.387 [2024-07-21 12:05:17.114805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.387 [2024-07-21 12:05:17.118646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.387 [2024-07-21 12:05:17.118686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:18.387 [2024-07-21 12:05:17.118696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.816 ms 00:23:18.387 [2024-07-21 12:05:17.118713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.387 [2024-07-21 12:05:17.122061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.387 [2024-07-21 12:05:17.122095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:18.387 [2024-07-21 12:05:17.122104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.323 ms 00:23:18.387 [2024-07-21 12:05:17.122111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.387 [2024-07-21 12:05:17.124330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.387 [2024-07-21 12:05:17.124363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:18.387 [2024-07-21 12:05:17.124372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.209 ms 00:23:18.387 [2024-07-21 12:05:17.124379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.387 [2024-07-21 12:05:17.125837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.387 [2024-07-21 12:05:17.125862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:18.387 [2024-07-21 12:05:17.125871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.419 ms 00:23:18.387 [2024-07-21 12:05:17.125878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.387 [2024-07-21 12:05:17.127009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.387 [2024-07-21 12:05:17.127042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:18.387 [2024-07-21 12:05:17.127052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.109 ms 00:23:18.387 [2024-07-21 12:05:17.127059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.387 [2024-07-21 12:05:17.128214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.387 [2024-07-21 12:05:17.128247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:18.387 [2024-07-21 12:05:17.128257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.088 ms 00:23:18.387 [2024-07-21 12:05:17.128265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.387 [2024-07-21 12:05:17.128290] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:18.387 [2024-07-21 12:05:17.128306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:23:18.387 [2024-07-21 12:05:17.128321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 
wr_cnt: 1 state: open 00:23:18.387 [2024-07-21 12:05:17.128331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
27: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128747] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:18.387 [2024-07-21 12:05:17.128905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.128912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.128919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.128927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.128934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.128941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.128949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.128956] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.128963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.128970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.128977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.128984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.128991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.128998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.129006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.129014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.129021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.129028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.129035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.129043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.129050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.129057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.129065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.129072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.129080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.129087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.129095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.129103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.129110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.129117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.129125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:18.388 [2024-07-21 12:05:17.129140] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:18.388 [2024-07-21 12:05:17.129148] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d39eddfe-e90a-49b5-ad2b-4e148dc65448 00:23:18.388 [2024-07-21 12:05:17.129156] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:23:18.388 [2024-07-21 12:05:17.129164] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 158912 00:23:18.388 [2024-07-21 12:05:17.129171] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 156928 00:23:18.388 [2024-07-21 12:05:17.129190] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0126 00:23:18.388 [2024-07-21 12:05:17.129198] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:18.388 [2024-07-21 12:05:17.129213] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:18.388 [2024-07-21 12:05:17.129221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:18.388 [2024-07-21 12:05:17.129228] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:18.388 [2024-07-21 12:05:17.129234] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:18.388 [2024-07-21 12:05:17.129241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.388 [2024-07-21 12:05:17.129249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:18.388 [2024-07-21 12:05:17.129257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:23:18.388 [2024-07-21 12:05:17.129270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.388 [2024-07-21 12:05:17.130984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.388 [2024-07-21 12:05:17.131003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:18.388 [2024-07-21 12:05:17.131011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.700 ms 00:23:18.388 [2024-07-21 12:05:17.131018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.388 [2024-07-21 12:05:17.131138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:18.388 [2024-07-21 12:05:17.131157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:18.388 [2024-07-21 12:05:17.131172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:23:18.388 [2024-07-21 12:05:17.131187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.388 [2024-07-21 12:05:17.136743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.388 [2024-07-21 12:05:17.136800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:18.388 [2024-07-21 12:05:17.136854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.388 [2024-07-21 12:05:17.136876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.388 [2024-07-21 12:05:17.136934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.388 [2024-07-21 12:05:17.136971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:18.388 [2024-07-21 12:05:17.137006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.388 [2024-07-21 12:05:17.137026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.388 [2024-07-21 12:05:17.137108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.388 [2024-07-21 12:05:17.137146] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:18.388 [2024-07-21 12:05:17.137176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.388 [2024-07-21 12:05:17.137196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.388 [2024-07-21 12:05:17.137230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.388 [2024-07-21 12:05:17.137252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:18.388 [2024-07-21 12:05:17.137286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.388 [2024-07-21 12:05:17.137312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.388 [2024-07-21 12:05:17.150500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.388 [2024-07-21 12:05:17.150640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:18.388 [2024-07-21 12:05:17.150670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.388 [2024-07-21 12:05:17.150703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.388 [2024-07-21 12:05:17.158849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.388 [2024-07-21 12:05:17.158960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:18.388 [2024-07-21 12:05:17.158996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.388 [2024-07-21 12:05:17.159016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.388 [2024-07-21 12:05:17.159080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.388 [2024-07-21 12:05:17.159103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:18.388 [2024-07-21 12:05:17.159163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.388 [2024-07-21 12:05:17.159191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.388 [2024-07-21 12:05:17.159239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.388 [2024-07-21 12:05:17.159277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:18.388 [2024-07-21 12:05:17.159307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.388 [2024-07-21 12:05:17.159341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.388 [2024-07-21 12:05:17.159446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.388 [2024-07-21 12:05:17.159488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:18.388 [2024-07-21 12:05:17.159516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.388 [2024-07-21 12:05:17.159535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.388 [2024-07-21 12:05:17.159621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.388 [2024-07-21 12:05:17.159658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:18.388 [2024-07-21 12:05:17.159689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.388 [2024-07-21 12:05:17.159725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.388 [2024-07-21 12:05:17.159788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:23:18.388 [2024-07-21 12:05:17.159827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:18.388 [2024-07-21 12:05:17.159855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.388 [2024-07-21 12:05:17.159874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.388 [2024-07-21 12:05:17.159966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:18.388 [2024-07-21 12:05:17.159996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:18.388 [2024-07-21 12:05:17.160021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:18.388 [2024-07-21 12:05:17.160052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:18.388 [2024-07-21 12:05:17.160196] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 66.211 ms, result 0 00:23:18.647 00:23:18.647 00:23:18.647 12:05:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:20.550 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:20.550 12:05:19 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:20.550 [2024-07-21 12:05:19.306497] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:23:20.550 [2024-07-21 12:05:19.306613] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93784 ] 00:23:20.809 [2024-07-21 12:05:19.468263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.809 [2024-07-21 12:05:19.516521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.809 [2024-07-21 12:05:19.617752] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:20.809 [2024-07-21 12:05:19.617845] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:21.068 [2024-07-21 12:05:19.765388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.068 [2024-07-21 12:05:19.765443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:21.068 [2024-07-21 12:05:19.765458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:21.068 [2024-07-21 12:05:19.765473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.068 [2024-07-21 12:05:19.765525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.068 [2024-07-21 12:05:19.765536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:21.068 [2024-07-21 12:05:19.765544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:21.069 [2024-07-21 12:05:19.765560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.069 [2024-07-21 12:05:19.765580] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:21.069 [2024-07-21 12:05:19.765780] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache 
device 00:23:21.069 [2024-07-21 12:05:19.765801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.069 [2024-07-21 12:05:19.765813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:21.069 [2024-07-21 12:05:19.765841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:23:21.069 [2024-07-21 12:05:19.765848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.069 [2024-07-21 12:05:19.767232] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:21.069 [2024-07-21 12:05:19.769740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.069 [2024-07-21 12:05:19.769774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:21.069 [2024-07-21 12:05:19.769789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.514 ms 00:23:21.069 [2024-07-21 12:05:19.769795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.069 [2024-07-21 12:05:19.769863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.069 [2024-07-21 12:05:19.769873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:21.069 [2024-07-21 12:05:19.769893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:23:21.069 [2024-07-21 12:05:19.769901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.069 [2024-07-21 12:05:19.776510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.069 [2024-07-21 12:05:19.776541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:21.069 [2024-07-21 12:05:19.776551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.565 ms 00:23:21.069 [2024-07-21 12:05:19.776558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.069 [2024-07-21 12:05:19.776643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.069 [2024-07-21 12:05:19.776657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:21.069 [2024-07-21 12:05:19.776666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:21.069 [2024-07-21 12:05:19.776673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.069 [2024-07-21 12:05:19.776727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.069 [2024-07-21 12:05:19.776741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:21.069 [2024-07-21 12:05:19.776751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:21.069 [2024-07-21 12:05:19.776758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.069 [2024-07-21 12:05:19.776788] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:21.069 [2024-07-21 12:05:19.778408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.069 [2024-07-21 12:05:19.778432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:21.069 [2024-07-21 12:05:19.778441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.635 ms 00:23:21.069 [2024-07-21 12:05:19.778459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.069 [2024-07-21 12:05:19.778490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.069 
[2024-07-21 12:05:19.778506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:21.069 [2024-07-21 12:05:19.778524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:21.069 [2024-07-21 12:05:19.778532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.069 [2024-07-21 12:05:19.778553] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:21.069 [2024-07-21 12:05:19.778580] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:21.069 [2024-07-21 12:05:19.778615] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:21.069 [2024-07-21 12:05:19.778634] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:23:21.069 [2024-07-21 12:05:19.778719] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:21.069 [2024-07-21 12:05:19.778738] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:21.069 [2024-07-21 12:05:19.778747] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:23:21.069 [2024-07-21 12:05:19.778764] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:21.069 [2024-07-21 12:05:19.778773] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:21.069 [2024-07-21 12:05:19.778781] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:21.069 [2024-07-21 12:05:19.778789] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:21.069 [2024-07-21 12:05:19.778796] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:21.069 [2024-07-21 12:05:19.778803] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:21.069 [2024-07-21 12:05:19.778811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.069 [2024-07-21 12:05:19.778832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:21.069 [2024-07-21 12:05:19.778848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:23:21.069 [2024-07-21 12:05:19.778860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.069 [2024-07-21 12:05:19.778937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.069 [2024-07-21 12:05:19.778946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:21.069 [2024-07-21 12:05:19.778953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:21.069 [2024-07-21 12:05:19.778960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.069 [2024-07-21 12:05:19.779043] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:21.069 [2024-07-21 12:05:19.779053] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:21.069 [2024-07-21 12:05:19.779062] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:21.069 [2024-07-21 12:05:19.779069] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.069 [2024-07-21 12:05:19.779080] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:21.069 [2024-07-21 12:05:19.779086] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:21.069 [2024-07-21 12:05:19.779093] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:21.069 [2024-07-21 12:05:19.779100] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:21.069 [2024-07-21 12:05:19.779107] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:21.069 [2024-07-21 12:05:19.779114] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:21.069 [2024-07-21 12:05:19.779121] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:21.069 [2024-07-21 12:05:19.779128] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:21.069 [2024-07-21 12:05:19.779146] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:21.069 [2024-07-21 12:05:19.779153] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:21.069 [2024-07-21 12:05:19.779160] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:21.069 [2024-07-21 12:05:19.779167] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.069 [2024-07-21 12:05:19.779173] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:21.069 [2024-07-21 12:05:19.779180] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:21.069 [2024-07-21 12:05:19.779187] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.069 [2024-07-21 12:05:19.779194] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:21.069 [2024-07-21 12:05:19.779200] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:21.069 [2024-07-21 12:05:19.779207] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.069 [2024-07-21 12:05:19.779213] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:21.069 [2024-07-21 12:05:19.779220] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:21.069 [2024-07-21 12:05:19.779227] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.069 [2024-07-21 12:05:19.779233] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:21.069 [2024-07-21 12:05:19.779239] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:21.069 [2024-07-21 12:05:19.779254] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.069 [2024-07-21 12:05:19.779265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:21.069 [2024-07-21 12:05:19.779272] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:21.069 [2024-07-21 12:05:19.779279] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:21.069 [2024-07-21 12:05:19.779285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:21.069 [2024-07-21 12:05:19.779292] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:21.069 [2024-07-21 12:05:19.779298] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:21.069 [2024-07-21 12:05:19.779304] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:21.069 [2024-07-21 12:05:19.779311] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:21.069 [2024-07-21 
12:05:19.779317] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:21.069 [2024-07-21 12:05:19.779324] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:21.069 [2024-07-21 12:05:19.779330] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:21.069 [2024-07-21 12:05:19.779337] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.069 [2024-07-21 12:05:19.779343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:21.069 [2024-07-21 12:05:19.779352] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:21.069 [2024-07-21 12:05:19.779360] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.069 [2024-07-21 12:05:19.779366] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:21.069 [2024-07-21 12:05:19.779376] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:21.069 [2024-07-21 12:05:19.779384] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:21.069 [2024-07-21 12:05:19.779398] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:21.069 [2024-07-21 12:05:19.779405] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:21.069 [2024-07-21 12:05:19.779412] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:21.069 [2024-07-21 12:05:19.779419] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:21.069 [2024-07-21 12:05:19.779425] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:21.069 [2024-07-21 12:05:19.779432] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:21.069 [2024-07-21 12:05:19.779439] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:21.069 [2024-07-21 12:05:19.779447] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:21.069 [2024-07-21 12:05:19.779456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:21.069 [2024-07-21 12:05:19.779472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:21.069 [2024-07-21 12:05:19.779479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:21.069 [2024-07-21 12:05:19.779486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:21.069 [2024-07-21 12:05:19.779493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:21.069 [2024-07-21 12:05:19.779500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:21.069 [2024-07-21 12:05:19.779510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:21.069 [2024-07-21 12:05:19.779517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:21.069 [2024-07-21 12:05:19.779524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:21.069 [2024-07-21 12:05:19.779531] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:21.069 [2024-07-21 12:05:19.779538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:21.069 [2024-07-21 12:05:19.779545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:21.069 [2024-07-21 12:05:19.779552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:21.069 [2024-07-21 12:05:19.779559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:21.069 [2024-07-21 12:05:19.779566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:21.069 [2024-07-21 12:05:19.779572] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:21.069 [2024-07-21 12:05:19.779580] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:21.069 [2024-07-21 12:05:19.779588] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:21.069 [2024-07-21 12:05:19.779595] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:21.069 [2024-07-21 12:05:19.779602] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:21.069 [2024-07-21 12:05:19.779610] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:21.069 [2024-07-21 12:05:19.779618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.069 [2024-07-21 12:05:19.779628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:21.069 [2024-07-21 12:05:19.779638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:23:21.069 [2024-07-21 12:05:19.779645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.069 [2024-07-21 12:05:19.799233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.069 [2024-07-21 12:05:19.799367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:21.069 [2024-07-21 12:05:19.799409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.577 ms 00:23:21.069 [2024-07-21 12:05:19.799444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.069 [2024-07-21 12:05:19.799571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.069 [2024-07-21 12:05:19.799601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:21.069 [2024-07-21 12:05:19.799688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:23:21.069 [2024-07-21 12:05:19.799715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.069 [2024-07-21 12:05:19.810019] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.070 [2024-07-21 12:05:19.810113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:21.070 [2024-07-21 12:05:19.810143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.225 ms 00:23:21.070 [2024-07-21 12:05:19.810177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.070 [2024-07-21 12:05:19.810232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.070 [2024-07-21 12:05:19.810255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:21.070 [2024-07-21 12:05:19.810275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:21.070 [2024-07-21 12:05:19.810298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.070 [2024-07-21 12:05:19.810771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.070 [2024-07-21 12:05:19.810850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:21.070 [2024-07-21 12:05:19.810894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:23:21.070 [2024-07-21 12:05:19.810916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.070 [2024-07-21 12:05:19.811048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.070 [2024-07-21 12:05:19.811090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:21.070 [2024-07-21 12:05:19.811118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:23:21.070 [2024-07-21 12:05:19.811145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.070 [2024-07-21 12:05:19.816864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.070 [2024-07-21 12:05:19.816945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:21.070 [2024-07-21 12:05:19.816993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.684 ms 00:23:21.070 [2024-07-21 12:05:19.817014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.070 [2024-07-21 12:05:19.819584] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:21.070 [2024-07-21 12:05:19.819668] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:21.070 [2024-07-21 12:05:19.819709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.070 [2024-07-21 12:05:19.819732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:21.070 [2024-07-21 12:05:19.819753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.589 ms 00:23:21.070 [2024-07-21 12:05:19.819771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.070 [2024-07-21 12:05:19.833014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.070 [2024-07-21 12:05:19.833086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:21.070 [2024-07-21 12:05:19.833119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.207 ms 00:23:21.070 [2024-07-21 12:05:19.833140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.070 [2024-07-21 12:05:19.835113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.070 
[2024-07-21 12:05:19.835174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:21.070 [2024-07-21 12:05:19.835214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.928 ms 00:23:21.070 [2024-07-21 12:05:19.835231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.070 [2024-07-21 12:05:19.836867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.070 [2024-07-21 12:05:19.836929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:21.070 [2024-07-21 12:05:19.836959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.575 ms 00:23:21.070 [2024-07-21 12:05:19.836994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.070 [2024-07-21 12:05:19.837297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.070 [2024-07-21 12:05:19.837350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:21.070 [2024-07-21 12:05:19.837381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:23:21.070 [2024-07-21 12:05:19.837424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.070 [2024-07-21 12:05:19.859290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.070 [2024-07-21 12:05:19.859436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:21.070 [2024-07-21 12:05:19.859467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.867 ms 00:23:21.070 [2024-07-21 12:05:19.859501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.070 [2024-07-21 12:05:19.865607] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:21.070 [2024-07-21 12:05:19.868638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.070 [2024-07-21 12:05:19.868714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:21.070 [2024-07-21 12:05:19.868741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.096 ms 00:23:21.070 [2024-07-21 12:05:19.868761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.070 [2024-07-21 12:05:19.868853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.070 [2024-07-21 12:05:19.868880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:21.070 [2024-07-21 12:05:19.868900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:21.070 [2024-07-21 12:05:19.868926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.070 [2024-07-21 12:05:19.869753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.070 [2024-07-21 12:05:19.869803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:21.070 [2024-07-21 12:05:19.869841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 00:23:21.070 [2024-07-21 12:05:19.869856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.070 [2024-07-21 12:05:19.869886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.070 [2024-07-21 12:05:19.869895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:21.070 [2024-07-21 12:05:19.869903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:21.070 [2024-07-21 
12:05:19.869910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.070 [2024-07-21 12:05:19.869952] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:21.070 [2024-07-21 12:05:19.869962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.070 [2024-07-21 12:05:19.869977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:21.070 [2024-07-21 12:05:19.869988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:21.070 [2024-07-21 12:05:19.869997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.070 [2024-07-21 12:05:19.873699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.070 [2024-07-21 12:05:19.873732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:21.070 [2024-07-21 12:05:19.873742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.668 ms 00:23:21.070 [2024-07-21 12:05:19.873749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.070 [2024-07-21 12:05:19.873812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:21.070 [2024-07-21 12:05:19.873836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:21.070 [2024-07-21 12:05:19.873845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:21.070 [2024-07-21 12:05:19.873857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:21.070 [2024-07-21 12:05:19.874850] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 109.220 ms, result 0 00:23:56.146  Copying: 31/1024 [MB] (31 MBps) Copying: 62/1024 [MB] (30 MBps) Copying: 93/1024 [MB] (31 MBps) Copying: 124/1024 [MB] (31 MBps) Copying: 156/1024 [MB] (31 MBps) Copying: 186/1024 [MB] (30 MBps) Copying: 217/1024 [MB] (30 MBps) Copying: 248/1024 [MB] (30 MBps) Copying: 279/1024 [MB] (30 MBps) Copying: 308/1024 [MB] (29 MBps) Copying: 337/1024 [MB] (29 MBps) Copying: 366/1024 [MB] (28 MBps) Copying: 394/1024 [MB] (28 MBps) Copying: 422/1024 [MB] (28 MBps) Copying: 451/1024 [MB] (28 MBps) Copying: 479/1024 [MB] (28 MBps) Copying: 508/1024 [MB] (28 MBps) Copying: 536/1024 [MB] (28 MBps) Copying: 565/1024 [MB] (28 MBps) Copying: 593/1024 [MB] (28 MBps) Copying: 622/1024 [MB] (28 MBps) Copying: 651/1024 [MB] (28 MBps) Copying: 679/1024 [MB] (28 MBps) Copying: 708/1024 [MB] (28 MBps) Copying: 736/1024 [MB] (28 MBps) Copying: 764/1024 [MB] (28 MBps) Copying: 793/1024 [MB] (29 MBps) Copying: 822/1024 [MB] (28 MBps) Copying: 850/1024 [MB] (28 MBps) Copying: 879/1024 [MB] (28 MBps) Copying: 907/1024 [MB] (28 MBps) Copying: 936/1024 [MB] (29 MBps) Copying: 965/1024 [MB] (29 MBps) Copying: 994/1024 [MB] (29 MBps) Copying: 1024/1024 [MB] (average 29 MBps)[2024-07-21 12:05:54.962703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.146 [2024-07-21 12:05:54.962755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:56.146 [2024-07-21 12:05:54.962775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:56.146 [2024-07-21 12:05:54.962783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.146 [2024-07-21 12:05:54.962801] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:56.146 [2024-07-21 12:05:54.963504] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.146 [2024-07-21 12:05:54.963519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:56.146 [2024-07-21 12:05:54.963528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:23:56.146 [2024-07-21 12:05:54.963535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.146 [2024-07-21 12:05:54.963713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.146 [2024-07-21 12:05:54.963722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:56.146 [2024-07-21 12:05:54.963733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:23:56.146 [2024-07-21 12:05:54.963740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.146 [2024-07-21 12:05:54.966690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.146 [2024-07-21 12:05:54.966741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:56.147 [2024-07-21 12:05:54.966791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.943 ms 00:23:56.147 [2024-07-21 12:05:54.966810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.147 [2024-07-21 12:05:54.972302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.147 [2024-07-21 12:05:54.972368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:56.147 [2024-07-21 12:05:54.972394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.462 ms 00:23:56.147 [2024-07-21 12:05:54.972431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.147 [2024-07-21 12:05:54.974085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.147 [2024-07-21 12:05:54.974153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:56.147 [2024-07-21 12:05:54.974180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.608 ms 00:23:56.147 [2024-07-21 12:05:54.974200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.147 [2024-07-21 12:05:54.978301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.147 [2024-07-21 12:05:54.978375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:56.147 [2024-07-21 12:05:54.978406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.080 ms 00:23:56.147 [2024-07-21 12:05:54.978445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.147 [2024-07-21 12:05:54.982198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.147 [2024-07-21 12:05:54.982269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:56.147 [2024-07-21 12:05:54.982297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.713 ms 00:23:56.147 [2024-07-21 12:05:54.982323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.147 [2024-07-21 12:05:54.984304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.147 [2024-07-21 12:05:54.984366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:56.147 [2024-07-21 12:05:54.984413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.956 ms 00:23:56.147 [2024-07-21 12:05:54.984433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:56.147 [2024-07-21 12:05:54.985970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.147 [2024-07-21 12:05:54.986029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:56.147 [2024-07-21 12:05:54.986054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.508 ms 00:23:56.147 [2024-07-21 12:05:54.986074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.147 [2024-07-21 12:05:54.987244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.147 [2024-07-21 12:05:54.987310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:56.147 [2024-07-21 12:05:54.987340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.144 ms 00:23:56.147 [2024-07-21 12:05:54.987386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.147 [2024-07-21 12:05:54.988573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.147 [2024-07-21 12:05:54.988650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:56.147 [2024-07-21 12:05:54.988697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.137 ms 00:23:56.147 [2024-07-21 12:05:54.988717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.147 [2024-07-21 12:05:54.988756] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:56.147 [2024-07-21 12:05:54.988792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:23:56.147 [2024-07-21 12:05:54.988845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3840 / 261120 wr_cnt: 1 state: open 00:23:56.147 [2024-07-21 12:05:54.988882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.988912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.988952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.988980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989305] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 
12:05:54.989488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 
00:23:56.147 [2024-07-21 12:05:54.989663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 
wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:56.147 [2024-07-21 12:05:54.989927] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:56.147 [2024-07-21 12:05:54.989935] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: d39eddfe-e90a-49b5-ad2b-4e148dc65448 00:23:56.147 [2024-07-21 12:05:54.989954] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264960 00:23:56.147 [2024-07-21 12:05:54.989961] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:56.147 [2024-07-21 12:05:54.989968] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:56.148 [2024-07-21 12:05:54.989975] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:56.148 [2024-07-21 12:05:54.989982] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:56.148 [2024-07-21 12:05:54.989996] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:56.148 [2024-07-21 12:05:54.990015] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:56.148 [2024-07-21 12:05:54.990021] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:56.148 [2024-07-21 12:05:54.990027] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:56.148 [2024-07-21 12:05:54.990035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.148 [2024-07-21 12:05:54.990043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:56.148 [2024-07-21 12:05:54.990057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.282 ms 00:23:56.148 [2024-07-21 12:05:54.990063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.148 [2024-07-21 12:05:54.991862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.148 [2024-07-21 12:05:54.991912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:56.148 
[2024-07-21 12:05:54.991941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.785 ms 00:23:56.148 [2024-07-21 12:05:54.991965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.148 [2024-07-21 12:05:54.992099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.148 [2024-07-21 12:05:54.992135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:56.148 [2024-07-21 12:05:54.992160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:23:56.148 [2024-07-21 12:05:54.992205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.148 [2024-07-21 12:05:54.997596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.148 [2024-07-21 12:05:54.997658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:56.148 [2024-07-21 12:05:54.997707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.148 [2024-07-21 12:05:54.997726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.148 [2024-07-21 12:05:54.997788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.148 [2024-07-21 12:05:54.997837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:56.148 [2024-07-21 12:05:54.997865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.148 [2024-07-21 12:05:54.997884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.148 [2024-07-21 12:05:54.997982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.148 [2024-07-21 12:05:54.998019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:56.148 [2024-07-21 12:05:54.998046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.148 [2024-07-21 12:05:54.998077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.148 [2024-07-21 12:05:54.998106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.148 [2024-07-21 12:05:54.998126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:56.148 [2024-07-21 12:05:54.998151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.148 [2024-07-21 12:05:54.998174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.405 [2024-07-21 12:05:55.011077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.405 [2024-07-21 12:05:55.011204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:56.405 [2024-07-21 12:05:55.011232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.405 [2024-07-21 12:05:55.011263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.405 [2024-07-21 12:05:55.019285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.405 [2024-07-21 12:05:55.019389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:56.405 [2024-07-21 12:05:55.019417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.405 [2024-07-21 12:05:55.019436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.405 [2024-07-21 12:05:55.019496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.405 [2024-07-21 12:05:55.019518] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:56.405 [2024-07-21 12:05:55.019538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.405 [2024-07-21 12:05:55.019566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.405 [2024-07-21 12:05:55.019611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.405 [2024-07-21 12:05:55.019632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:56.405 [2024-07-21 12:05:55.019659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.405 [2024-07-21 12:05:55.019694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.405 [2024-07-21 12:05:55.019791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.405 [2024-07-21 12:05:55.019836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:56.405 [2024-07-21 12:05:55.019866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.405 [2024-07-21 12:05:55.019886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.405 [2024-07-21 12:05:55.019939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.405 [2024-07-21 12:05:55.019983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:56.405 [2024-07-21 12:05:55.020011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.405 [2024-07-21 12:05:55.020031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.405 [2024-07-21 12:05:55.020081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.405 [2024-07-21 12:05:55.020111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:56.405 [2024-07-21 12:05:55.020139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.405 [2024-07-21 12:05:55.020159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.405 [2024-07-21 12:05:55.020217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:56.405 [2024-07-21 12:05:55.020251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:56.405 [2024-07-21 12:05:55.020275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:56.405 [2024-07-21 12:05:55.020301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.405 [2024-07-21 12:05:55.020444] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 57.808 ms, result 0 00:23:56.405 00:23:56.405 00:23:56.663 12:05:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:23:58.050 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:23:58.050 12:05:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:23:58.050 12:05:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:23:58.050 12:05:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:58.050 12:05:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:58.313 12:05:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 
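The "/home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK" line above is the point of the whole dirty-shutdown test: a digest recorded before the unclean shutdown must still match the data FTL recovers afterwards. A minimal sketch of that round-trip (the "before" half happens earlier in dirty_shutdown.sh; paths as used by the test):

# before the dirty shutdown: record a digest of data written through FTL
md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 > /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
# after recovery: re-read through FTL and verify; md5sum prints "<file>: OK" on a match
md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5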
00:23:58.313 12:05:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:58.313 12:05:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:23:58.313 12:05:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 92253 00:23:58.313 12:05:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@946 -- # '[' -z 92253 ']' 00:23:58.313 Process with pid 92253 is not found 00:23:58.313 12:05:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # kill -0 92253 00:23:58.313 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (92253) - No such process 00:23:58.313 12:05:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@973 -- # echo 'Process with pid 92253 is not found' 00:23:58.313 12:05:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:23:58.570 Remove shared memory files 00:23:58.570 12:05:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:23:58.570 12:05:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:58.570 12:05:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:23:58.570 12:05:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:23:58.570 12:05:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:23:58.570 12:05:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:58.570 12:05:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:23:58.570 ************************************ 00:23:58.570 END TEST ftl_dirty_shutdown 00:23:58.570 ************************************ 00:23:58.570 00:23:58.570 real 3m1.418s 00:23:58.570 user 3m25.715s 00:23:58.570 sys 0m27.584s 00:23:58.570 12:05:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:58.570 12:05:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:58.570 12:05:57 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:23:58.570 12:05:57 ftl -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:23:58.570 12:05:57 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:58.570 12:05:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:58.570 ************************************ 00:23:58.570 START TEST ftl_upgrade_shutdown 00:23:58.570 ************************************ 00:23:58.570 12:05:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:23:58.830 * Looking for test storage... 00:23:58.830 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
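run_test hands upgrade_shutdown.sh two PCI addresses; as the FTL_BASE/FTL_CACHE exports below show, the first becomes the FTL base device and the second its non-volatile cache. A sketch of running the same test standalone, assuming both controllers are bound to a userspace driver and hugepages are configured:

cd /home/vagrant/spdk_repo/spdk
sudo ./test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0   # <base BDF> <NV-cache BDF>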
00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:23:58.830 
12:05:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94242 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94242 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 94242 ']' 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:58.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:58.830 12:05:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:58.830 [2024-07-21 12:05:57.647430] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
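tcp_target_setup has just launched spdk_tgt pinned to core 0, and waitforlisten will block until the target answers on its default RPC socket. A sketch of the same readiness check (rpc_get_methods is used here only as a cheap query; any RPC would do):

./build/bin/spdk_tgt '--cpumask=[0]' &
tgt_pid=$!
# poll /var/tmp/spdk.sock until the target responds, bailing out if it died
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$tgt_pid" 2>/dev/null || exit 1
    sleep 0.5
done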
00:23:58.830 [2024-07-21 12:05:57.647660] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94242 ] 00:23:59.089 [2024-07-21 12:05:57.813221] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.089 [2024-07-21 12:05:57.857804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.657 12:05:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:59.657 12:05:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:23:59.657 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:59.657 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:23:59.657 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:23:59.657 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:59.657 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:23:59.657 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:59.657 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:23:59.657 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:59.657 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:23:59.657 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:59.657 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:23:59.657 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:59.657 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:23:59.657 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:23:59.657 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:23:59.657 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:23:59.658 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:23:59.658 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:59.658 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:23:59.658 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:59.658 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:23:59.916 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:23:59.916 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:59.916 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:23:59.916 12:05:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=basen1 00:23:59.916 12:05:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:23:59.916 12:05:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:23:59.916 12:05:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1377 -- # local nb 
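get_bdev_size, whose bdev_get_bdevs dump follows, derives a bdev's size in MiB as block_size × num_blocks / 2^20; with the values reported below for basen1 (4096-byte blocks, 1310720 of them) that works out to 5120 MiB:

bs=$(./scripts/rpc.py bdev_get_bdevs -b basen1 | jq '.[] .block_size')   # 4096
nb=$(./scripts/rpc.py bdev_get_bdevs -b basen1 | jq '.[] .num_blocks')   # 1310720
echo $(( bs * nb / 1024 / 1024 ))                                        # 5120 (MiB)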
00:23:59.916 12:05:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:24:00.175 12:05:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:24:00.175 { 00:24:00.175 "name": "basen1", 00:24:00.175 "aliases": [ 00:24:00.175 "73091600-c5ae-4a0c-8141-9a44f4229b74" 00:24:00.175 ], 00:24:00.175 "product_name": "NVMe disk", 00:24:00.175 "block_size": 4096, 00:24:00.175 "num_blocks": 1310720, 00:24:00.175 "uuid": "73091600-c5ae-4a0c-8141-9a44f4229b74", 00:24:00.175 "assigned_rate_limits": { 00:24:00.175 "rw_ios_per_sec": 0, 00:24:00.175 "rw_mbytes_per_sec": 0, 00:24:00.175 "r_mbytes_per_sec": 0, 00:24:00.175 "w_mbytes_per_sec": 0 00:24:00.175 }, 00:24:00.175 "claimed": true, 00:24:00.175 "claim_type": "read_many_write_one", 00:24:00.175 "zoned": false, 00:24:00.175 "supported_io_types": { 00:24:00.175 "read": true, 00:24:00.175 "write": true, 00:24:00.175 "unmap": true, 00:24:00.175 "write_zeroes": true, 00:24:00.175 "flush": true, 00:24:00.175 "reset": true, 00:24:00.175 "compare": true, 00:24:00.175 "compare_and_write": false, 00:24:00.175 "abort": true, 00:24:00.175 "nvme_admin": true, 00:24:00.175 "nvme_io": true 00:24:00.175 }, 00:24:00.175 "driver_specific": { 00:24:00.175 "nvme": [ 00:24:00.175 { 00:24:00.175 "pci_address": "0000:00:11.0", 00:24:00.175 "trid": { 00:24:00.176 "trtype": "PCIe", 00:24:00.176 "traddr": "0000:00:11.0" 00:24:00.176 }, 00:24:00.176 "ctrlr_data": { 00:24:00.176 "cntlid": 0, 00:24:00.176 "vendor_id": "0x1b36", 00:24:00.176 "model_number": "QEMU NVMe Ctrl", 00:24:00.176 "serial_number": "12341", 00:24:00.176 "firmware_revision": "8.0.0", 00:24:00.176 "subnqn": "nqn.2019-08.org.qemu:12341", 00:24:00.176 "oacs": { 00:24:00.176 "security": 0, 00:24:00.176 "format": 1, 00:24:00.176 "firmware": 0, 00:24:00.176 "ns_manage": 1 00:24:00.176 }, 00:24:00.176 "multi_ctrlr": false, 00:24:00.176 "ana_reporting": false 00:24:00.176 }, 00:24:00.176 "vs": { 00:24:00.176 "nvme_version": "1.4" 00:24:00.176 }, 00:24:00.176 "ns_data": { 00:24:00.176 "id": 1, 00:24:00.176 "can_share": false 00:24:00.176 } 00:24:00.176 } 00:24:00.176 ], 00:24:00.176 "mp_policy": "active_passive" 00:24:00.176 } 00:24:00.176 } 00:24:00.176 ]' 00:24:00.176 12:05:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:24:00.176 12:05:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:24:00.176 12:05:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:24:00.176 12:05:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # nb=1310720 00:24:00.176 12:05:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:24:00.176 12:05:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # echo 5120 00:24:00.176 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:24:00.176 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:24:00.176 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:24:00.176 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:00.176 12:05:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:24:00.435 12:05:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=e41e9d2e-00e4-4c5a-9a0d-09bb0504b22a 00:24:00.435 12:05:59 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@29 -- # for lvs in $stores 00:24:00.435 12:05:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e41e9d2e-00e4-4c5a-9a0d-09bb0504b22a 00:24:00.694 12:05:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:24:00.694 12:05:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=d4ebdc6e-ffc8-44d6-88c7-662d5d9c1deb 00:24:00.694 12:05:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u d4ebdc6e-ffc8-44d6-88c7-662d5d9c1deb 00:24:00.953 12:05:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=2931efd8-a050-4dc2-9e5b-7d59931fb144 00:24:00.953 12:05:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 2931efd8-a050-4dc2-9e5b-7d59931fb144 ]] 00:24:00.953 12:05:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 2931efd8-a050-4dc2-9e5b-7d59931fb144 5120 00:24:00.953 12:05:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:24:00.953 12:05:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:24:00.953 12:05:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=2931efd8-a050-4dc2-9e5b-7d59931fb144 00:24:00.953 12:05:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:24:00.953 12:05:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 2931efd8-a050-4dc2-9e5b-7d59931fb144 00:24:00.953 12:05:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1374 -- # local bdev_name=2931efd8-a050-4dc2-9e5b-7d59931fb144 00:24:00.953 12:05:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1375 -- # local bdev_info 00:24:00.953 12:05:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1376 -- # local bs 00:24:00.953 12:05:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1377 -- # local nb 00:24:00.953 12:05:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 2931efd8-a050-4dc2-9e5b-7d59931fb144 00:24:01.212 12:05:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:24:01.212 { 00:24:01.212 "name": "2931efd8-a050-4dc2-9e5b-7d59931fb144", 00:24:01.212 "aliases": [ 00:24:01.212 "lvs/basen1p0" 00:24:01.212 ], 00:24:01.212 "product_name": "Logical Volume", 00:24:01.212 "block_size": 4096, 00:24:01.212 "num_blocks": 5242880, 00:24:01.212 "uuid": "2931efd8-a050-4dc2-9e5b-7d59931fb144", 00:24:01.212 "assigned_rate_limits": { 00:24:01.212 "rw_ios_per_sec": 0, 00:24:01.212 "rw_mbytes_per_sec": 0, 00:24:01.212 "r_mbytes_per_sec": 0, 00:24:01.212 "w_mbytes_per_sec": 0 00:24:01.212 }, 00:24:01.212 "claimed": false, 00:24:01.212 "zoned": false, 00:24:01.212 "supported_io_types": { 00:24:01.212 "read": true, 00:24:01.212 "write": true, 00:24:01.212 "unmap": true, 00:24:01.212 "write_zeroes": true, 00:24:01.212 "flush": false, 00:24:01.212 "reset": true, 00:24:01.212 "compare": false, 00:24:01.212 "compare_and_write": false, 00:24:01.212 "abort": false, 00:24:01.212 "nvme_admin": false, 00:24:01.212 "nvme_io": false 00:24:01.212 }, 00:24:01.212 "driver_specific": { 00:24:01.212 "lvol": { 00:24:01.212 "lvol_store_uuid": "d4ebdc6e-ffc8-44d6-88c7-662d5d9c1deb", 00:24:01.212 "base_bdev": "basen1", 00:24:01.212 "thin_provision": true, 00:24:01.212 "num_allocated_clusters": 0, 00:24:01.212 
"snapshot": false, 00:24:01.212 "clone": false, 00:24:01.212 "esnap_clone": false 00:24:01.212 } 00:24:01.212 } 00:24:01.212 } 00:24:01.212 ]' 00:24:01.212 12:05:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:24:01.212 12:05:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # bs=4096 00:24:01.212 12:05:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:24:01.212 12:05:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # nb=5242880 00:24:01.212 12:05:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bdev_size=20480 00:24:01.212 12:05:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # echo 20480 00:24:01.212 12:05:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:24:01.212 12:05:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:24:01.212 12:05:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:24:01.471 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:24:01.471 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:24:01.471 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:24:01.730 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:24:01.730 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:24:01.731 12:06:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 2931efd8-a050-4dc2-9e5b-7d59931fb144 -c cachen1p0 --l2p_dram_limit 2 00:24:01.731 [2024-07-21 12:06:00.500867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:01.731 [2024-07-21 12:06:00.500916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:24:01.731 [2024-07-21 12:06:00.500932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:01.731 [2024-07-21 12:06:00.500956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:01.731 [2024-07-21 12:06:00.501052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:01.731 [2024-07-21 12:06:00.501063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:01.731 [2024-07-21 12:06:00.501072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.066 ms 00:24:01.731 [2024-07-21 12:06:00.501081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:01.731 [2024-07-21 12:06:00.501110] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:24:01.731 [2024-07-21 12:06:00.501410] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:24:01.731 [2024-07-21 12:06:00.501437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:01.731 [2024-07-21 12:06:00.501444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:01.731 [2024-07-21 12:06:00.501454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.340 ms 00:24:01.731 [2024-07-21 12:06:00.501461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:01.731 [2024-07-21 12:06:00.501492] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: 
*NOTICE*: [FTL][ftl] Create new FTL, UUID 19130d7f-9734-4b81-99ba-8321fb54dc37 00:24:01.731 [2024-07-21 12:06:00.502926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:01.731 [2024-07-21 12:06:00.502953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:24:01.731 [2024-07-21 12:06:00.502981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:24:01.731 [2024-07-21 12:06:00.502991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:01.731 [2024-07-21 12:06:00.510389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:01.731 [2024-07-21 12:06:00.510428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:24:01.731 [2024-07-21 12:06:00.510437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.367 ms 00:24:01.731 [2024-07-21 12:06:00.510447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:01.731 [2024-07-21 12:06:00.510551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:01.731 [2024-07-21 12:06:00.510570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:01.731 [2024-07-21 12:06:00.510579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:24:01.731 [2024-07-21 12:06:00.510588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:01.731 [2024-07-21 12:06:00.510651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:01.731 [2024-07-21 12:06:00.510669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:24:01.731 [2024-07-21 12:06:00.510677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:24:01.731 [2024-07-21 12:06:00.510687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:01.731 [2024-07-21 12:06:00.510711] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:24:01.731 [2024-07-21 12:06:00.512466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:01.731 [2024-07-21 12:06:00.512507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:01.731 [2024-07-21 12:06:00.512519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.764 ms 00:24:01.731 [2024-07-21 12:06:00.512526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:01.731 [2024-07-21 12:06:00.512564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:01.731 [2024-07-21 12:06:00.512572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:24:01.731 [2024-07-21 12:06:00.512581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:01.731 [2024-07-21 12:06:00.512588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:01.731 [2024-07-21 12:06:00.512608] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:24:01.731 [2024-07-21 12:06:00.512739] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:24:01.731 [2024-07-21 12:06:00.512753] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:24:01.731 [2024-07-21 12:06:00.512764] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:24:01.731 [2024-07-21 12:06:00.512775] 
ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:24:01.731 [2024-07-21 12:06:00.512783] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:24:01.731 [2024-07-21 12:06:00.512792] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:24:01.731 [2024-07-21 12:06:00.512800] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:24:01.731 [2024-07-21 12:06:00.512811] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:24:01.731 [2024-07-21 12:06:00.512831] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:24:01.731 [2024-07-21 12:06:00.512841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:01.731 [2024-07-21 12:06:00.512848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:24:01.731 [2024-07-21 12:06:00.512858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.237 ms 00:24:01.731 [2024-07-21 12:06:00.512865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:01.731 [2024-07-21 12:06:00.512935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:01.731 [2024-07-21 12:06:00.512943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:24:01.731 [2024-07-21 12:06:00.512968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:24:01.731 [2024-07-21 12:06:00.512982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:01.731 [2024-07-21 12:06:00.513091] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:24:01.731 [2024-07-21 12:06:00.513106] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:24:01.731 [2024-07-21 12:06:00.513118] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:01.731 [2024-07-21 12:06:00.513125] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:01.731 [2024-07-21 12:06:00.513135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:24:01.731 [2024-07-21 12:06:00.513141] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:24:01.731 [2024-07-21 12:06:00.513150] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:24:01.731 [2024-07-21 12:06:00.513156] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:24:01.731 [2024-07-21 12:06:00.513165] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:24:01.731 [2024-07-21 12:06:00.513171] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:01.731 [2024-07-21 12:06:00.513181] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:24:01.731 [2024-07-21 12:06:00.513187] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:24:01.731 [2024-07-21 12:06:00.513195] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:01.731 [2024-07-21 12:06:00.513202] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:24:01.731 [2024-07-21 12:06:00.513214] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:24:01.731 [2024-07-21 12:06:00.513221] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:01.731 [2024-07-21 12:06:00.513229] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:24:01.731 [2024-07-21 12:06:00.513235] ftl_layout.c: 
119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:24:01.731 [2024-07-21 12:06:00.513244] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:01.731 [2024-07-21 12:06:00.513250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:24:01.731 [2024-07-21 12:06:00.513259] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:24:01.731 [2024-07-21 12:06:00.513266] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:01.731 [2024-07-21 12:06:00.513275] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:24:01.731 [2024-07-21 12:06:00.513281] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:24:01.731 [2024-07-21 12:06:00.513291] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:01.731 [2024-07-21 12:06:00.513297] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:24:01.731 [2024-07-21 12:06:00.513306] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:24:01.731 [2024-07-21 12:06:00.513312] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:01.731 [2024-07-21 12:06:00.513321] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:24:01.731 [2024-07-21 12:06:00.513327] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:24:01.731 [2024-07-21 12:06:00.513337] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:01.731 [2024-07-21 12:06:00.513343] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:24:01.731 [2024-07-21 12:06:00.513351] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:24:01.731 [2024-07-21 12:06:00.513358] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:01.731 [2024-07-21 12:06:00.513366] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:24:01.731 [2024-07-21 12:06:00.513373] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:24:01.731 [2024-07-21 12:06:00.513381] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:01.731 [2024-07-21 12:06:00.513387] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:24:01.731 [2024-07-21 12:06:00.513396] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:24:01.731 [2024-07-21 12:06:00.513402] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:01.731 [2024-07-21 12:06:00.513410] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:24:01.731 [2024-07-21 12:06:00.513416] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:24:01.731 [2024-07-21 12:06:00.513425] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:01.731 [2024-07-21 12:06:00.513431] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:24:01.731 [2024-07-21 12:06:00.513441] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:24:01.731 [2024-07-21 12:06:00.513448] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:01.731 [2024-07-21 12:06:00.513460] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:01.731 [2024-07-21 12:06:00.513470] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:24:01.731 [2024-07-21 12:06:00.513480] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:24:01.731 [2024-07-21 12:06:00.513487] ftl_layout.c: 
121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:24:01.731 [2024-07-21 12:06:00.513495] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:24:01.731 [2024-07-21 12:06:00.513501] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:24:01.732 [2024-07-21 12:06:00.513510] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:24:01.732 [2024-07-21 12:06:00.513521] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:24:01.732 [2024-07-21 12:06:00.513540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:01.732 [2024-07-21 12:06:00.513557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:24:01.732 [2024-07-21 12:06:00.513567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:24:01.732 [2024-07-21 12:06:00.513574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:24:01.732 [2024-07-21 12:06:00.513584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:24:01.732 [2024-07-21 12:06:00.513591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:24:01.732 [2024-07-21 12:06:00.513599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:24:01.732 [2024-07-21 12:06:00.513606] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:24:01.732 [2024-07-21 12:06:00.513616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:24:01.732 [2024-07-21 12:06:00.513623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:24:01.732 [2024-07-21 12:06:00.513632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:24:01.732 [2024-07-21 12:06:00.513639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:24:01.732 [2024-07-21 12:06:00.513648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:24:01.732 [2024-07-21 12:06:00.513654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:24:01.732 [2024-07-21 12:06:00.513664] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:24:01.732 [2024-07-21 12:06:00.513671] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:24:01.732 [2024-07-21 12:06:00.513681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:01.732 [2024-07-21 12:06:00.513695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:01.732 [2024-07-21 12:06:00.513730] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:24:01.732 [2024-07-21 12:06:00.513738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:24:01.732 [2024-07-21 12:06:00.513747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:24:01.732 [2024-07-21 12:06:00.513755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:01.732 [2024-07-21 12:06:00.513764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:24:01.732 [2024-07-21 12:06:00.513771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.735 ms 00:24:01.732 [2024-07-21 12:06:00.513784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:01.732 [2024-07-21 12:06:00.513839] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:24:01.732 [2024-07-21 12:06:00.513852] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:24:05.932 [2024-07-21 12:06:04.482277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:05.932 [2024-07-21 12:06:04.482339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:24:05.932 [2024-07-21 12:06:04.482353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3976.087 ms 00:24:05.932 [2024-07-21 12:06:04.482373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:05.932 [2024-07-21 12:06:04.493064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:05.932 [2024-07-21 12:06:04.493118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:24:05.932 [2024-07-21 12:06:04.493132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.611 ms 00:24:05.933 [2024-07-21 12:06:04.493142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:05.933 [2024-07-21 12:06:04.493189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:05.933 [2024-07-21 12:06:04.493204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:24:05.933 [2024-07-21 12:06:04.493212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:24:05.933 [2024-07-21 12:06:04.493220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:05.933 [2024-07-21 12:06:04.502917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:05.933 [2024-07-21 12:06:04.502988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:24:05.933 [2024-07-21 12:06:04.502999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.668 ms 00:24:05.933 [2024-07-21 12:06:04.503008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:05.933 [2024-07-21 12:06:04.503044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:05.933 [2024-07-21 12:06:04.503053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:24:05.933 [2024-07-21 12:06:04.503061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:05.933 [2024-07-21 12:06:04.503079] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:05.933 [2024-07-21 12:06:04.503525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:05.933 [2024-07-21 12:06:04.503538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:24:05.933 [2024-07-21 12:06:04.503546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.404 ms 00:24:05.933 [2024-07-21 12:06:04.503555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:05.933 [2024-07-21 12:06:04.503601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:05.933 [2024-07-21 12:06:04.503618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:24:05.933 [2024-07-21 12:06:04.503626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:24:05.933 [2024-07-21 12:06:04.503646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:05.933 [2024-07-21 12:06:04.510530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:05.933 [2024-07-21 12:06:04.510574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:24:05.933 [2024-07-21 12:06:04.510583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.864 ms 00:24:05.933 [2024-07-21 12:06:04.510609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:05.933 [2024-07-21 12:06:04.517582] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:24:05.933 [2024-07-21 12:06:04.518634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:05.933 [2024-07-21 12:06:04.518661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:24:05.933 [2024-07-21 12:06:04.518672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.972 ms 00:24:05.933 [2024-07-21 12:06:04.518679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:05.933 [2024-07-21 12:06:04.545110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:05.933 [2024-07-21 12:06:04.545156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:24:05.933 [2024-07-21 12:06:04.545174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.452 ms 00:24:05.933 [2024-07-21 12:06:04.545200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:05.933 [2024-07-21 12:06:04.545274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:05.933 [2024-07-21 12:06:04.545282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:24:05.933 [2024-07-21 12:06:04.545292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:24:05.933 [2024-07-21 12:06:04.545299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:05.933 [2024-07-21 12:06:04.548267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:05.933 [2024-07-21 12:06:04.548304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:24:05.933 [2024-07-21 12:06:04.548317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.940 ms 00:24:05.933 [2024-07-21 12:06:04.548327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:05.933 [2024-07-21 12:06:04.550953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:05.933 [2024-07-21 12:06:04.550983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 
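The "l2p maximum resident size is: 1 (of 2) MiB" notice ties back to the --l2p_dram_limit 2 argument passed to bdev_ftl_create above: the full logical-to-physical table for this device needs roughly 14.4 MiB (3774873 L2P entries × 4-byte addresses, matching the 14.50 MiB l2p region in the layout dump), so FTL keeps only a capped window of it resident in DRAM and pages the rest. A back-of-the-envelope check:

echo $(( 3774873 * 4 ))   # 15099492 bytes ≈ 14.4 MiB -> the 14.50 MiB "Region l2p" above
# --l2p_dram_limit 2 caps the resident L2P cache at 2 MiB; 1 MiB is currently resident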
00:24:05.933 [2024-07-21 12:06:04.550994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.605 ms 00:24:05.933 [2024-07-21 12:06:04.551001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:05.933 [2024-07-21 12:06:04.551255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:05.933 [2024-07-21 12:06:04.551272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:24:05.933 [2024-07-21 12:06:04.551283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.235 ms 00:24:05.933 [2024-07-21 12:06:04.551290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:05.933 [2024-07-21 12:06:04.595392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:05.933 [2024-07-21 12:06:04.595444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:24:05.933 [2024-07-21 12:06:04.595460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.160 ms 00:24:05.933 [2024-07-21 12:06:04.595469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:05.933 [2024-07-21 12:06:04.599718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:05.933 [2024-07-21 12:06:04.599756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:24:05.933 [2024-07-21 12:06:04.599769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.218 ms 00:24:05.933 [2024-07-21 12:06:04.599776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:05.933 [2024-07-21 12:06:04.602943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:05.933 [2024-07-21 12:06:04.602971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:24:05.933 [2024-07-21 12:06:04.602983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.127 ms 00:24:05.933 [2024-07-21 12:06:04.603005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:05.933 [2024-07-21 12:06:04.606328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:05.933 [2024-07-21 12:06:04.606358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:24:05.933 [2024-07-21 12:06:04.606371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.293 ms 00:24:05.933 [2024-07-21 12:06:04.606394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:05.933 [2024-07-21 12:06:04.606434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:05.933 [2024-07-21 12:06:04.606442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:24:05.933 [2024-07-21 12:06:04.606461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:24:05.933 [2024-07-21 12:06:04.606468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:05.933 [2024-07-21 12:06:04.606531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:05.933 [2024-07-21 12:06:04.606540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:24:05.933 [2024-07-21 12:06:04.606549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:24:05.933 [2024-07-21 12:06:04.606556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:05.933 [2024-07-21 12:06:04.607557] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4114.186 ms, result 0 00:24:05.933 { 00:24:05.933 "name": 
"ftl", 00:24:05.933 "uuid": "19130d7f-9734-4b81-99ba-8321fb54dc37" 00:24:05.933 } 00:24:05.933 12:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:24:05.933 [2024-07-21 12:06:04.795839] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.191 12:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:24:06.191 12:06:04 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:24:06.450 [2024-07-21 12:06:05.159508] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:24:06.450 12:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:24:06.710 [2024-07-21 12:06:05.335523] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:06.710 12:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:24:06.968 12:06:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:24:06.968 12:06:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:24:06.968 12:06:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:24:06.968 12:06:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:24:06.968 12:06:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:24:06.968 12:06:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:24:06.968 12:06:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:24:06.968 12:06:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:24:06.968 12:06:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:24:06.968 12:06:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:24:06.968 12:06:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:24:06.968 Fill FTL, iteration 1 00:24:06.968 12:06:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:24:06.968 12:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:06.968 12:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:06.968 12:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:06.968 12:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:24:06.969 12:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=94357 00:24:06.969 12:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:24:06.969 12:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:24:06.969 12:06:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 94357 /var/tmp/spdk.tgt.sock 00:24:06.969 12:06:05 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@827 -- # '[' -z 94357 ']' 00:24:06.969 12:06:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:24:06.969 12:06:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:06.969 12:06:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:24:06.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:24:06.969 12:06:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:06.969 12:06:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:06.969 [2024-07-21 12:06:05.754797] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:06.969 [2024-07-21 12:06:05.754995] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94357 ] 00:24:07.226 [2024-07-21 12:06:05.923178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.226 [2024-07-21 12:06:05.995964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.792 12:06:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:07.792 12:06:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:24:07.792 12:06:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:24:08.051 ftln1 00:24:08.051 12:06:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:24:08.051 12:06:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:24:08.051 12:06:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:24:08.051 12:06:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 94357 00:24:08.051 12:06:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 94357 ']' 00:24:08.051 12:06:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 94357 00:24:08.051 12:06:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:24:08.310 12:06:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:08.310 12:06:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94357 00:24:08.310 killing process with pid 94357 00:24:08.310 12:06:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:24:08.310 12:06:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:24:08.310 12:06:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94357' 00:24:08.310 12:06:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 94357 00:24:08.310 12:06:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 94357 00:24:08.569 12:06:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:24:08.569 12:06:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:24:08.569 [2024-07-21 12:06:07.401654] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:08.569 [2024-07-21 12:06:07.401770] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94390 ] 00:24:08.828 [2024-07-21 12:06:07.561274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.828 [2024-07-21 12:06:07.609262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.528  Copying: 248/1024 [MB] (248 MBps) Copying: 492/1024 [MB] (244 MBps) Copying: 737/1024 [MB] (245 MBps) Copying: 985/1024 [MB] (248 MBps) Copying: 1024/1024 [MB] (average 246 MBps) 00:24:13.528 00:24:13.528 12:06:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:24:13.528 12:06:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:24:13.528 Calculate MD5 checksum, iteration 1 00:24:13.528 12:06:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:13.528 12:06:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:13.528 12:06:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:13.528 12:06:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:13.528 12:06:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:13.528 12:06:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:13.528 [2024-07-21 12:06:12.297361] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
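The trace above wires the FTL bdev into an NVMe-oF TCP target and then drives it from a second, short-lived SPDK process: the target side creates the TCP transport, subsystem nqn.2018-09.io.spdk:cnode0, namespace, and listener, while tcp_initiator_setup starts a throwaway spdk_tgt on core 1, attaches the remote namespace as ftln1, and snapshots the resulting bdev subsystem config to ini.json so that spdk_dd can replay it without a live initiator process. A minimal sketch of that flow, using the same RPC calls the log shows (paths are the ones in the log; the ini_rpc helper name and the bare kill are illustrative):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # target side: export the FTL bdev over NVMe/TCP on 127.0.0.1:4420
  $rpc nvmf_create_transport --trtype TCP
  $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1

  # initiator side: throwaway spdk_tgt whose only job is to produce ini.json
  ini_rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock "$@"; }
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
  spdk_ini_pid=$!
  # (the real script waits for the RPC socket to answer before issuing calls)
  ini_rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
  {
      echo '{"subsystems": ['
      ini_rpc save_subsystem_config -n bdev
      echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
  kill "$spdk_ini_pid"   # spdk_dd loads ini.json via --json, so the initiator can go away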
00:24:13.528 [2024-07-21 12:06:12.297626] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94445 ] 00:24:13.787 [2024-07-21 12:06:12.468684] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.787 [2024-07-21 12:06:12.518078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.679  Copying: 679/1024 [MB] (679 MBps) Copying: 1024/1024 [MB] (average 656 MBps) 00:24:15.679 00:24:15.679 12:06:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:24:15.679 12:06:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:17.583 12:06:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:24:17.583 Fill FTL, iteration 2 00:24:17.583 12:06:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=23f30feb0c50df5cfdf6a16bcc28ae11 00:24:17.583 12:06:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:24:17.583 12:06:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:24:17.583 12:06:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:24:17.583 12:06:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:24:17.583 12:06:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:17.583 12:06:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:17.583 12:06:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:17.583 12:06:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:17.583 12:06:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:24:17.841 [2024-07-21 12:06:16.466786] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
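Each loop pass writes count=1024 blocks of bs=1048576 bytes of /dev/urandom into ftln1 at the current seek offset, reads the same range back from the matching skip offset into the scratch file, and records the md5 of what came back; iteration 1 just produced sums[0]=23f30feb0c50df5cfdf6a16bcc28ae11 before seek advanced to 1024. Reduced to a sketch (tcp_dd is the common.sh helper the log traces, i.e. the spdk_dd invocation with ini.json; the file path is the one in the log):

  seek=0 skip=0 bs=1048576 count=1024 iterations=2 qd=2
  sums=()
  testfile=/home/vagrant/spdk_repo/spdk/test/ftl/file
  for ((i = 0; i < iterations; i++)); do
      # fill: 1 GiB of random data into the remote FTL bdev at the current offset
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      seek=$((seek + count))
      # read back the same range and record its checksum for this iteration
      tcp_dd --ib=ftln1 --of=$testfile --bs=$bs --count=$count --qd=$qd --skip=$skip
      skip=$((skip + count))
      sums[i]=$(md5sum "$testfile" | cut -f1 -d' ')
  done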
00:24:17.841 [2024-07-21 12:06:16.467117] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94490 ] 00:24:17.841 [2024-07-21 12:06:16.626687] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.841 [2024-07-21 12:06:16.674523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.545  Copying: 250/1024 [MB] (250 MBps) Copying: 502/1024 [MB] (252 MBps) Copying: 751/1024 [MB] (249 MBps) Copying: 996/1024 [MB] (245 MBps) Copying: 1024/1024 [MB] (average 248 MBps) 00:24:22.545 00:24:22.545 12:06:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:24:22.545 12:06:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:24:22.545 Calculate MD5 checksum, iteration 2 00:24:22.545 12:06:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:22.545 12:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:22.545 12:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:22.545 12:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:22.545 12:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:22.545 12:06:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:22.545 [2024-07-21 12:06:21.303941] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
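The offsets double as a quick sanity check: with bs=1048576 and count=1024, every pass moves exactly 1 GiB, so iteration 1 wrote and verified MiB 0-1023 (seek=0, skip=0) and iteration 2 covers MiB 1024-2047 (seek=1024, skip=1024). After the second pass both seek and skip land on 2048, leaving 2 GiB of checksummed random data on the FTL namespace going into the shutdown.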
00:24:22.545 [2024-07-21 12:06:21.304115] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94543 ] 00:24:22.804 [2024-07-21 12:06:21.463667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.804 [2024-07-21 12:06:21.512539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:25.317  Copying: 667/1024 [MB] (667 MBps) Copying: 1024/1024 [MB] (average 660 MBps) 00:24:25.317 00:24:25.317 12:06:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:24:25.317 12:06:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:27.285 12:06:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:24:27.285 12:06:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=34eca62b611c1693b8f41bfa0a51cb6c 00:24:27.285 12:06:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:24:27.285 12:06:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:24:27.285 12:06:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:24:27.285 [2024-07-21 12:06:26.021254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:27.285 [2024-07-21 12:06:26.021384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:24:27.285 [2024-07-21 12:06:26.021448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:24:27.285 [2024-07-21 12:06:26.021469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:27.285 [2024-07-21 12:06:26.021511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:27.285 [2024-07-21 12:06:26.021536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:24:27.285 [2024-07-21 12:06:26.021584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:27.285 [2024-07-21 12:06:26.021659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:27.285 [2024-07-21 12:06:26.021695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:27.285 [2024-07-21 12:06:26.021739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:24:27.285 [2024-07-21 12:06:26.021761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:24:27.285 [2024-07-21 12:06:26.021792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:27.285 [2024-07-21 12:06:26.021900] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.612 ms, result 0 00:24:27.285 true 00:24:27.285 12:06:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:27.544 { 00:24:27.544 "name": "ftl", 00:24:27.544 "properties": [ 00:24:27.544 { 00:24:27.544 "name": "superblock_version", 00:24:27.544 "value": 5, 00:24:27.544 "read-only": true 00:24:27.544 }, 00:24:27.544 { 00:24:27.544 "name": "base_device", 00:24:27.544 "bands": [ 00:24:27.544 { 00:24:27.544 "id": 0, 00:24:27.544 "state": "FREE", 00:24:27.544 "validity": 0.0 00:24:27.544 }, 00:24:27.544 { 00:24:27.544 "id": 1, 00:24:27.544 "state": 
"FREE", 00:24:27.544 "validity": 0.0 00:24:27.544 }, 00:24:27.544 { 00:24:27.544 "id": 2, 00:24:27.544 "state": "FREE", 00:24:27.544 "validity": 0.0 00:24:27.544 }, 00:24:27.544 { 00:24:27.544 "id": 3, 00:24:27.544 "state": "FREE", 00:24:27.544 "validity": 0.0 00:24:27.544 }, 00:24:27.544 { 00:24:27.544 "id": 4, 00:24:27.544 "state": "FREE", 00:24:27.544 "validity": 0.0 00:24:27.544 }, 00:24:27.544 { 00:24:27.544 "id": 5, 00:24:27.544 "state": "FREE", 00:24:27.544 "validity": 0.0 00:24:27.544 }, 00:24:27.544 { 00:24:27.544 "id": 6, 00:24:27.544 "state": "FREE", 00:24:27.544 "validity": 0.0 00:24:27.544 }, 00:24:27.544 { 00:24:27.544 "id": 7, 00:24:27.544 "state": "FREE", 00:24:27.544 "validity": 0.0 00:24:27.544 }, 00:24:27.544 { 00:24:27.544 "id": 8, 00:24:27.544 "state": "FREE", 00:24:27.544 "validity": 0.0 00:24:27.544 }, 00:24:27.544 { 00:24:27.544 "id": 9, 00:24:27.544 "state": "FREE", 00:24:27.544 "validity": 0.0 00:24:27.544 }, 00:24:27.544 { 00:24:27.544 "id": 10, 00:24:27.544 "state": "FREE", 00:24:27.544 "validity": 0.0 00:24:27.544 }, 00:24:27.544 { 00:24:27.544 "id": 11, 00:24:27.544 "state": "FREE", 00:24:27.544 "validity": 0.0 00:24:27.544 }, 00:24:27.544 { 00:24:27.544 "id": 12, 00:24:27.544 "state": "FREE", 00:24:27.544 "validity": 0.0 00:24:27.544 }, 00:24:27.544 { 00:24:27.544 "id": 13, 00:24:27.544 "state": "FREE", 00:24:27.544 "validity": 0.0 00:24:27.544 }, 00:24:27.544 { 00:24:27.544 "id": 14, 00:24:27.544 "state": "FREE", 00:24:27.544 "validity": 0.0 00:24:27.544 }, 00:24:27.544 { 00:24:27.544 "id": 15, 00:24:27.544 "state": "FREE", 00:24:27.544 "validity": 0.0 00:24:27.544 }, 00:24:27.544 { 00:24:27.544 "id": 16, 00:24:27.544 "state": "FREE", 00:24:27.544 "validity": 0.0 00:24:27.544 }, 00:24:27.544 { 00:24:27.544 "id": 17, 00:24:27.544 "state": "FREE", 00:24:27.544 "validity": 0.0 00:24:27.544 } 00:24:27.544 ], 00:24:27.544 "read-only": true 00:24:27.544 }, 00:24:27.544 { 00:24:27.544 "name": "cache_device", 00:24:27.544 "type": "bdev", 00:24:27.544 "chunks": [ 00:24:27.544 { 00:24:27.544 "id": 0, 00:24:27.544 "state": "INACTIVE", 00:24:27.544 "utilization": 0.0 00:24:27.544 }, 00:24:27.544 { 00:24:27.544 "id": 1, 00:24:27.544 "state": "CLOSED", 00:24:27.544 "utilization": 1.0 00:24:27.544 }, 00:24:27.544 { 00:24:27.544 "id": 2, 00:24:27.544 "state": "CLOSED", 00:24:27.544 "utilization": 1.0 00:24:27.544 }, 00:24:27.544 { 00:24:27.544 "id": 3, 00:24:27.544 "state": "OPEN", 00:24:27.544 "utilization": 0.001953125 00:24:27.544 }, 00:24:27.545 { 00:24:27.545 "id": 4, 00:24:27.545 "state": "OPEN", 00:24:27.545 "utilization": 0.0 00:24:27.545 } 00:24:27.545 ], 00:24:27.545 "read-only": true 00:24:27.545 }, 00:24:27.545 { 00:24:27.545 "name": "verbose_mode", 00:24:27.545 "value": true, 00:24:27.545 "unit": "", 00:24:27.545 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:24:27.545 }, 00:24:27.545 { 00:24:27.545 "name": "prep_upgrade_on_shutdown", 00:24:27.545 "value": false, 00:24:27.545 "unit": "", 00:24:27.545 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:24:27.545 } 00:24:27.545 ] 00:24:27.545 } 00:24:27.545 12:06:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:24:27.545 [2024-07-21 12:06:26.364939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:27.545 [2024-07-21 12:06:26.365040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Decode property 00:24:27.545 [2024-07-21 12:06:26.365076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:24:27.545 [2024-07-21 12:06:26.365112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:27.545 [2024-07-21 12:06:26.365164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:27.545 [2024-07-21 12:06:26.365186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:24:27.545 [2024-07-21 12:06:26.365216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:27.545 [2024-07-21 12:06:26.365235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:27.545 [2024-07-21 12:06:26.365266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:27.545 [2024-07-21 12:06:26.365300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:24:27.545 [2024-07-21 12:06:26.365325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:24:27.545 [2024-07-21 12:06:26.365345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:27.545 [2024-07-21 12:06:26.365463] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.516 ms, result 0 00:24:27.545 true 00:24:27.545 12:06:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:24:27.545 12:06:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:24:27.545 12:06:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:27.804 12:06:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:24:27.804 12:06:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:24:27.804 12:06:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:24:28.063 [2024-07-21 12:06:26.772471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:28.063 [2024-07-21 12:06:26.772587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:24:28.063 [2024-07-21 12:06:26.772618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:24:28.063 [2024-07-21 12:06:26.772637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:28.063 [2024-07-21 12:06:26.772675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:28.063 [2024-07-21 12:06:26.772696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:24:28.063 [2024-07-21 12:06:26.772714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:28.063 [2024-07-21 12:06:26.772731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:28.063 [2024-07-21 12:06:26.772770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:28.063 [2024-07-21 12:06:26.772791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:24:28.063 [2024-07-21 12:06:26.772849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:24:28.063 [2024-07-21 12:06:26.772881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:28.063 [2024-07-21 12:06:26.772955] mngt/ftl_mngt.c: 
459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.471 ms, result 0 00:24:28.063 true 00:24:28.063 12:06:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:28.322 { 00:24:28.322 "name": "ftl", 00:24:28.322 "properties": [ 00:24:28.322 { 00:24:28.322 "name": "superblock_version", 00:24:28.322 "value": 5, 00:24:28.322 "read-only": true 00:24:28.322 }, 00:24:28.322 { 00:24:28.322 "name": "base_device", 00:24:28.322 "bands": [ 00:24:28.322 { 00:24:28.322 "id": 0, 00:24:28.322 "state": "FREE", 00:24:28.322 "validity": 0.0 00:24:28.322 }, 00:24:28.322 { 00:24:28.322 "id": 1, 00:24:28.322 "state": "FREE", 00:24:28.322 "validity": 0.0 00:24:28.322 }, 00:24:28.322 { 00:24:28.322 "id": 2, 00:24:28.322 "state": "FREE", 00:24:28.322 "validity": 0.0 00:24:28.322 }, 00:24:28.322 { 00:24:28.322 "id": 3, 00:24:28.322 "state": "FREE", 00:24:28.322 "validity": 0.0 00:24:28.322 }, 00:24:28.322 { 00:24:28.322 "id": 4, 00:24:28.322 "state": "FREE", 00:24:28.322 "validity": 0.0 00:24:28.322 }, 00:24:28.322 { 00:24:28.322 "id": 5, 00:24:28.322 "state": "FREE", 00:24:28.322 "validity": 0.0 00:24:28.322 }, 00:24:28.322 { 00:24:28.322 "id": 6, 00:24:28.322 "state": "FREE", 00:24:28.322 "validity": 0.0 00:24:28.322 }, 00:24:28.322 { 00:24:28.322 "id": 7, 00:24:28.322 "state": "FREE", 00:24:28.322 "validity": 0.0 00:24:28.322 }, 00:24:28.322 { 00:24:28.322 "id": 8, 00:24:28.322 "state": "FREE", 00:24:28.322 "validity": 0.0 00:24:28.322 }, 00:24:28.322 { 00:24:28.322 "id": 9, 00:24:28.322 "state": "FREE", 00:24:28.322 "validity": 0.0 00:24:28.322 }, 00:24:28.322 { 00:24:28.322 "id": 10, 00:24:28.322 "state": "FREE", 00:24:28.322 "validity": 0.0 00:24:28.322 }, 00:24:28.322 { 00:24:28.322 "id": 11, 00:24:28.322 "state": "FREE", 00:24:28.322 "validity": 0.0 00:24:28.322 }, 00:24:28.322 { 00:24:28.322 "id": 12, 00:24:28.322 "state": "FREE", 00:24:28.322 "validity": 0.0 00:24:28.322 }, 00:24:28.322 { 00:24:28.322 "id": 13, 00:24:28.322 "state": "FREE", 00:24:28.322 "validity": 0.0 00:24:28.322 }, 00:24:28.322 { 00:24:28.322 "id": 14, 00:24:28.322 "state": "FREE", 00:24:28.322 "validity": 0.0 00:24:28.322 }, 00:24:28.322 { 00:24:28.322 "id": 15, 00:24:28.322 "state": "FREE", 00:24:28.322 "validity": 0.0 00:24:28.322 }, 00:24:28.322 { 00:24:28.322 "id": 16, 00:24:28.322 "state": "FREE", 00:24:28.322 "validity": 0.0 00:24:28.322 }, 00:24:28.322 { 00:24:28.322 "id": 17, 00:24:28.322 "state": "FREE", 00:24:28.322 "validity": 0.0 00:24:28.322 } 00:24:28.322 ], 00:24:28.322 "read-only": true 00:24:28.322 }, 00:24:28.322 { 00:24:28.322 "name": "cache_device", 00:24:28.322 "type": "bdev", 00:24:28.322 "chunks": [ 00:24:28.322 { 00:24:28.322 "id": 0, 00:24:28.322 "state": "INACTIVE", 00:24:28.322 "utilization": 0.0 00:24:28.322 }, 00:24:28.322 { 00:24:28.322 "id": 1, 00:24:28.322 "state": "CLOSED", 00:24:28.322 "utilization": 1.0 00:24:28.322 }, 00:24:28.322 { 00:24:28.322 "id": 2, 00:24:28.322 "state": "CLOSED", 00:24:28.322 "utilization": 1.0 00:24:28.322 }, 00:24:28.322 { 00:24:28.322 "id": 3, 00:24:28.322 "state": "OPEN", 00:24:28.322 "utilization": 0.001953125 00:24:28.322 }, 00:24:28.322 { 00:24:28.322 "id": 4, 00:24:28.322 "state": "OPEN", 00:24:28.322 "utilization": 0.0 00:24:28.322 } 00:24:28.322 ], 00:24:28.322 "read-only": true 00:24:28.322 }, 00:24:28.322 { 00:24:28.323 "name": "verbose_mode", 00:24:28.323 "value": true, 00:24:28.323 "unit": "", 00:24:28.323 "desc": "In 
verbose mode, user is able to get access to additional advanced FTL properties" 00:24:28.323 }, 00:24:28.323 { 00:24:28.323 "name": "prep_upgrade_on_shutdown", 00:24:28.323 "value": true, 00:24:28.323 "unit": "", 00:24:28.323 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:24:28.323 } 00:24:28.323 ] 00:24:28.323 } 00:24:28.323 12:06:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:24:28.323 12:06:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 94242 ]] 00:24:28.323 12:06:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 94242 00:24:28.323 12:06:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 94242 ']' 00:24:28.323 12:06:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 94242 00:24:28.323 12:06:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:24:28.323 12:06:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:28.323 12:06:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94242 00:24:28.323 killing process with pid 94242 00:24:28.323 12:06:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:28.323 12:06:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:28.323 12:06:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94242' 00:24:28.323 12:06:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 94242 00:24:28.323 12:06:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 94242 00:24:28.323 [2024-07-21 12:06:27.147217] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:24:28.323 [2024-07-21 12:06:27.153227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:28.323 [2024-07-21 12:06:27.153264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:24:28.323 [2024-07-21 12:06:27.153275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:28.323 [2024-07-21 12:06:27.153283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:28.323 [2024-07-21 12:06:27.153303] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:24:28.323 [2024-07-21 12:06:27.153967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:28.323 [2024-07-21 12:06:27.153981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:24:28.323 [2024-07-21 12:06:27.153990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.654 ms 00:24:28.323 [2024-07-21 12:06:27.153997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.458 [2024-07-21 12:06:34.420982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.458 [2024-07-21 12:06:34.421058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:24:36.458 [2024-07-21 12:06:34.421074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7280.970 ms 00:24:36.458 [2024-07-21 12:06:34.421082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.458 [2024-07-21 12:06:34.422191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.458 [2024-07-21 12:06:34.422213] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:24:36.458 [2024-07-21 12:06:34.422223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.092 ms 00:24:36.458 [2024-07-21 12:06:34.422231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.458 [2024-07-21 12:06:34.423189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.458 [2024-07-21 12:06:34.423209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:24:36.458 [2024-07-21 12:06:34.423218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.933 ms 00:24:36.458 [2024-07-21 12:06:34.423226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.458 [2024-07-21 12:06:34.424974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.458 [2024-07-21 12:06:34.425010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:24:36.458 [2024-07-21 12:06:34.425020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.644 ms 00:24:36.458 [2024-07-21 12:06:34.425027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.458 [2024-07-21 12:06:34.427598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.458 [2024-07-21 12:06:34.427634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:24:36.458 [2024-07-21 12:06:34.427644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.550 ms 00:24:36.458 [2024-07-21 12:06:34.427652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.458 [2024-07-21 12:06:34.427715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.458 [2024-07-21 12:06:34.427725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:24:36.458 [2024-07-21 12:06:34.427733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:24:36.458 [2024-07-21 12:06:34.427745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.458 [2024-07-21 12:06:34.428746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.458 [2024-07-21 12:06:34.428796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:24:36.458 [2024-07-21 12:06:34.428807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.988 ms 00:24:36.458 [2024-07-21 12:06:34.428824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.458 [2024-07-21 12:06:34.429837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.458 [2024-07-21 12:06:34.429862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:24:36.458 [2024-07-21 12:06:34.429870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.977 ms 00:24:36.458 [2024-07-21 12:06:34.429877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.458 [2024-07-21 12:06:34.430953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.458 [2024-07-21 12:06:34.430981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:24:36.458 [2024-07-21 12:06:34.430990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.054 ms 00:24:36.458 [2024-07-21 12:06:34.430997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.458 [2024-07-21 12:06:34.432006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.458 
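The two bdev_ftl_get_properties dumps above bracket the upgrade arming: the only field that changes is prep_upgrade_on_shutdown, from false to true. The jq filter at upgrade_shutdown.sh@63 counts the cache chunks that actually hold data; evaluated against the dump it keeps chunks 1 and 2 (CLOSED, utilization 1.0) and chunk 3 (OPEN, utilization 0.001953125, i.e. 1/512 of the chunk) while dropping chunk 0 (INACTIVE, 0.0) and chunk 4 (OPEN, 0.0), giving used=3, so the [[ 3 -eq 0 ]] check at @64 confirms the NV cache is not empty going into shutdown. The persistence trace interleaved here is the shutdown doing exactly that work: L2P, NV cache metadata, valid map, P2L, band and trim metadata, and finally the superblock, before the clean state is set.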
[2024-07-21 12:06:34.432051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:24:36.458 [2024-07-21 12:06:34.432060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.956 ms 00:24:36.458 [2024-07-21 12:06:34.432066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.458 [2024-07-21 12:06:34.432094] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:24:36.458 [2024-07-21 12:06:34.432118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:24:36.458 [2024-07-21 12:06:34.432135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:24:36.459 [2024-07-21 12:06:34.432143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:24:36.459 [2024-07-21 12:06:34.432151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:36.459 [2024-07-21 12:06:34.432159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:36.459 [2024-07-21 12:06:34.432166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:36.459 [2024-07-21 12:06:34.432174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:36.459 [2024-07-21 12:06:34.432182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:36.459 [2024-07-21 12:06:34.432190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:36.459 [2024-07-21 12:06:34.432198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:36.459 [2024-07-21 12:06:34.432206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:36.459 [2024-07-21 12:06:34.432213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:36.459 [2024-07-21 12:06:34.432220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:36.459 [2024-07-21 12:06:34.432227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:36.459 [2024-07-21 12:06:34.432235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:36.459 [2024-07-21 12:06:34.432242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:36.459 [2024-07-21 12:06:34.432250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:36.459 [2024-07-21 12:06:34.432258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:36.459 [2024-07-21 12:06:34.432268] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:24:36.459 [2024-07-21 12:06:34.432275] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 19130d7f-9734-4b81-99ba-8321fb54dc37 00:24:36.459 [2024-07-21 12:06:34.432283] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:24:36.459 [2024-07-21 12:06:34.432290] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:24:36.459 [2024-07-21 12:06:34.432298] 
ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:24:36.459 [2024-07-21 12:06:34.432317] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:24:36.459 [2024-07-21 12:06:34.432324] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:24:36.459 [2024-07-21 12:06:34.432332] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:24:36.459 [2024-07-21 12:06:34.432343] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:24:36.459 [2024-07-21 12:06:34.432350] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:24:36.459 [2024-07-21 12:06:34.432356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:24:36.459 [2024-07-21 12:06:34.432364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.459 [2024-07-21 12:06:34.432372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:24:36.459 [2024-07-21 12:06:34.432380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.271 ms 00:24:36.459 [2024-07-21 12:06:34.432395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.459 [2024-07-21 12:06:34.434308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.459 [2024-07-21 12:06:34.434327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:24:36.459 [2024-07-21 12:06:34.434336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.900 ms 00:24:36.459 [2024-07-21 12:06:34.434344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.459 [2024-07-21 12:06:34.434461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:36.459 [2024-07-21 12:06:34.434469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:24:36.459 [2024-07-21 12:06:34.434477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.095 ms 00:24:36.459 [2024-07-21 12:06:34.434484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.459 [2024-07-21 12:06:34.440898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:36.459 [2024-07-21 12:06:34.440927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:24:36.459 [2024-07-21 12:06:34.440936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:36.459 [2024-07-21 12:06:34.440949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.459 [2024-07-21 12:06:34.440976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:36.459 [2024-07-21 12:06:34.440984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:24:36.459 [2024-07-21 12:06:34.440991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:36.459 [2024-07-21 12:06:34.440998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.459 [2024-07-21 12:06:34.441072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:36.459 [2024-07-21 12:06:34.441084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:24:36.459 [2024-07-21 12:06:34.441092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:36.459 [2024-07-21 12:06:34.441098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.459 [2024-07-21 12:06:34.441119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:36.459 [2024-07-21 
12:06:34.441126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:24:36.459 [2024-07-21 12:06:34.441143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:36.459 [2024-07-21 12:06:34.441150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.459 [2024-07-21 12:06:34.454507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:36.459 [2024-07-21 12:06:34.454553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:24:36.459 [2024-07-21 12:06:34.454564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:36.459 [2024-07-21 12:06:34.454572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.459 [2024-07-21 12:06:34.463160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:36.459 [2024-07-21 12:06:34.463266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:24:36.459 [2024-07-21 12:06:34.463316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:36.459 [2024-07-21 12:06:34.463359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.459 [2024-07-21 12:06:34.463464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:36.459 [2024-07-21 12:06:34.463496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:36.459 [2024-07-21 12:06:34.463537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:36.459 [2024-07-21 12:06:34.463562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.459 [2024-07-21 12:06:34.463615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:36.459 [2024-07-21 12:06:34.463663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:36.459 [2024-07-21 12:06:34.463697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:36.459 [2024-07-21 12:06:34.463728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.459 [2024-07-21 12:06:34.463846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:36.459 [2024-07-21 12:06:34.463893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:24:36.459 [2024-07-21 12:06:34.463927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:36.459 [2024-07-21 12:06:34.463952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.459 [2024-07-21 12:06:34.464023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:36.459 [2024-07-21 12:06:34.464066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:24:36.459 [2024-07-21 12:06:34.464115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:36.459 [2024-07-21 12:06:34.464146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.459 [2024-07-21 12:06:34.464214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:24:36.459 [2024-07-21 12:06:34.464256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:36.459 [2024-07-21 12:06:34.464288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:36.459 [2024-07-21 12:06:34.464320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.459 [2024-07-21 12:06:34.464396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl] Rollback 00:24:36.459 [2024-07-21 12:06:34.464441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:36.459 [2024-07-21 12:06:34.464471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:24:36.459 [2024-07-21 12:06:34.464503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:36.459 [2024-07-21 12:06:34.464692] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7325.497 ms, result 0 00:24:37.838 12:06:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:24:37.838 12:06:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:24:37.838 12:06:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:24:37.838 12:06:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:24:37.838 12:06:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:37.838 12:06:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94728 00:24:37.838 12:06:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:37.838 12:06:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:24:37.838 12:06:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94728 00:24:37.838 12:06:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 94728 ']' 00:24:37.838 12:06:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.838 12:06:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:37.838 12:06:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.838 12:06:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:37.838 12:06:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:37.838 [2024-07-21 12:06:36.650785] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
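The shutdown statistics above are internally consistent and worth checking by hand: bands 1 and 2 are fully closed at 261120 blocks each, band 3 holds 2048 blocks, and 261120 + 261120 + 2048 = 524288, exactly the reported total valid LBAs and user writes. At FTL's 4 KiB block size (an assumption, but one the figures support) 524288 blocks is precisely the 2 GiB pushed in by the two fill passes, and WAF = total writes / user writes = 786752 / 524288 ≈ 1.5006, i.e. roughly half a block of metadata and relocation traffic per user block. The whole 'FTL shutdown' management process, dominated by the 7280.970 ms core-poller stop, completes in 7325.497 ms with result 0, after which the target is relaunched from the tgt.json config that save_config wrote before the fills.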
00:24:37.838 [2024-07-21 12:06:36.651465] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94728 ] 00:24:38.096 [2024-07-21 12:06:36.792990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.096 [2024-07-21 12:06:36.843126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.355 [2024-07-21 12:06:37.125005] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:24:38.355 [2024-07-21 12:06:37.125169] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:24:38.613 [2024-07-21 12:06:37.261095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.613 [2024-07-21 12:06:37.261238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:24:38.613 [2024-07-21 12:06:37.261273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:38.613 [2024-07-21 12:06:37.261294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.613 [2024-07-21 12:06:37.261395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.613 [2024-07-21 12:06:37.261424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:38.613 [2024-07-21 12:06:37.261455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:24:38.613 [2024-07-21 12:06:37.261487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.613 [2024-07-21 12:06:37.261575] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:24:38.613 [2024-07-21 12:06:37.261901] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:24:38.613 [2024-07-21 12:06:37.261972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.613 [2024-07-21 12:06:37.262023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:38.613 [2024-07-21 12:06:37.262057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.405 ms 00:24:38.613 [2024-07-21 12:06:37.262104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.613 [2024-07-21 12:06:37.263594] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:24:38.613 [2024-07-21 12:06:37.266016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.613 [2024-07-21 12:06:37.266086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:24:38.613 [2024-07-21 12:06:37.266120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.428 ms 00:24:38.613 [2024-07-21 12:06:37.266141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.613 [2024-07-21 12:06:37.266224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.613 [2024-07-21 12:06:37.266260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:24:38.613 [2024-07-21 12:06:37.266303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:24:38.613 [2024-07-21 12:06:37.266331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.613 [2024-07-21 12:06:37.272933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.613 [2024-07-21 12:06:37.273007] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:24:38.613 [2024-07-21 12:06:37.273042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.526 ms 00:24:38.613 [2024-07-21 12:06:37.273079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.613 [2024-07-21 12:06:37.273154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.613 [2024-07-21 12:06:37.273205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:38.613 [2024-07-21 12:06:37.273232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:24:38.613 [2024-07-21 12:06:37.273254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.613 [2024-07-21 12:06:37.273320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.613 [2024-07-21 12:06:37.273330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:24:38.613 [2024-07-21 12:06:37.273339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:24:38.613 [2024-07-21 12:06:37.273346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.613 [2024-07-21 12:06:37.273372] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:24:38.613 [2024-07-21 12:06:37.274992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.613 [2024-07-21 12:06:37.275016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:38.613 [2024-07-21 12:06:37.275026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.631 ms 00:24:38.613 [2024-07-21 12:06:37.275056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.613 [2024-07-21 12:06:37.275097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.613 [2024-07-21 12:06:37.275105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:24:38.613 [2024-07-21 12:06:37.275112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:38.613 [2024-07-21 12:06:37.275119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.613 [2024-07-21 12:06:37.275142] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:24:38.613 [2024-07-21 12:06:37.275162] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:24:38.613 [2024-07-21 12:06:37.275208] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:24:38.613 [2024-07-21 12:06:37.275234] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:24:38.613 [2024-07-21 12:06:37.275346] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:24:38.613 [2024-07-21 12:06:37.275364] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:24:38.613 [2024-07-21 12:06:37.275374] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:24:38.613 [2024-07-21 12:06:37.275384] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:24:38.613 [2024-07-21 12:06:37.275393] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 
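On restart the target comes back from the saved tgt.json, reattaches the same base and cache bdevs (with cachen1p0 as the write buffer cache), loads and validates the superblock in a few milliseconds, and re-derives the layout against a 20480 MiB base device and a 5120 MiB NV cache. The L2P figures printed next follow directly: 3774873 entries at 4 bytes each is about 14.4 MiB of mapping table, in line with the 14.50 MiB l2p region in the dump below (the small difference is presumably region alignment and padding; the log does not spell that out). The relaunch itself, reduced to a sketch with the command and helper the trace shows:

  # relaunch the target from the config saved before shutdown
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
      --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
  spdk_tgt_pid=$!
  waitforlisten "$spdk_tgt_pid"   # autotest helper: block until /var/tmp/spdk.sock answers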
00:24:38.613 [2024-07-21 12:06:37.275401] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:24:38.613 [2024-07-21 12:06:37.275408] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:24:38.613 [2024-07-21 12:06:37.275425] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:24:38.613 [2024-07-21 12:06:37.275434] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:24:38.613 [2024-07-21 12:06:37.275443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.613 [2024-07-21 12:06:37.275451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:24:38.613 [2024-07-21 12:06:37.275459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.304 ms 00:24:38.613 [2024-07-21 12:06:37.275474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.613 [2024-07-21 12:06:37.275546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.613 [2024-07-21 12:06:37.275553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:24:38.613 [2024-07-21 12:06:37.275562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:24:38.613 [2024-07-21 12:06:37.275569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.613 [2024-07-21 12:06:37.275654] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:24:38.613 [2024-07-21 12:06:37.275667] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:24:38.613 [2024-07-21 12:06:37.275676] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:38.613 [2024-07-21 12:06:37.275684] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:38.613 [2024-07-21 12:06:37.275691] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:24:38.613 [2024-07-21 12:06:37.275698] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:24:38.613 [2024-07-21 12:06:37.275707] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:24:38.613 [2024-07-21 12:06:37.275714] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:24:38.613 [2024-07-21 12:06:37.275721] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:24:38.613 [2024-07-21 12:06:37.275728] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:38.613 [2024-07-21 12:06:37.275734] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:24:38.613 [2024-07-21 12:06:37.275741] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:24:38.614 [2024-07-21 12:06:37.275747] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:38.614 [2024-07-21 12:06:37.275754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:24:38.614 [2024-07-21 12:06:37.275760] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:24:38.614 [2024-07-21 12:06:37.275767] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:38.614 [2024-07-21 12:06:37.275773] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:24:38.614 [2024-07-21 12:06:37.275780] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:24:38.614 [2024-07-21 12:06:37.275786] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:38.614 [2024-07-21 12:06:37.275792] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl] Region p2l0 00:24:38.614 [2024-07-21 12:06:37.275799] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:24:38.614 [2024-07-21 12:06:37.275805] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:38.614 [2024-07-21 12:06:37.275814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:24:38.614 [2024-07-21 12:06:37.275821] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:24:38.614 [2024-07-21 12:06:37.275827] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:38.614 [2024-07-21 12:06:37.275844] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:24:38.614 [2024-07-21 12:06:37.275851] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:24:38.614 [2024-07-21 12:06:37.275858] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:38.614 [2024-07-21 12:06:37.275864] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:24:38.614 [2024-07-21 12:06:37.275870] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:24:38.614 [2024-07-21 12:06:37.275877] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:38.614 [2024-07-21 12:06:37.275884] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:24:38.614 [2024-07-21 12:06:37.275891] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:24:38.614 [2024-07-21 12:06:37.275897] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:38.614 [2024-07-21 12:06:37.275904] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:24:38.614 [2024-07-21 12:06:37.275910] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:24:38.614 [2024-07-21 12:06:37.275916] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:38.614 [2024-07-21 12:06:37.275923] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:24:38.614 [2024-07-21 12:06:37.275931] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:24:38.614 [2024-07-21 12:06:37.275937] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:38.614 [2024-07-21 12:06:37.275943] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:24:38.614 [2024-07-21 12:06:37.275950] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:24:38.614 [2024-07-21 12:06:37.275956] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:38.614 [2024-07-21 12:06:37.275972] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:24:38.614 [2024-07-21 12:06:37.275980] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:24:38.614 [2024-07-21 12:06:37.275988] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:38.614 [2024-07-21 12:06:37.275995] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:38.614 [2024-07-21 12:06:37.276002] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:24:38.614 [2024-07-21 12:06:37.276009] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:24:38.614 [2024-07-21 12:06:37.276016] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:24:38.614 [2024-07-21 12:06:37.276023] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:24:38.614 [2024-07-21 12:06:37.276029] ftl_layout.c: 119:dump_region: 
*NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:24:38.614 [2024-07-21 12:06:37.276036] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:24:38.614 [2024-07-21 12:06:37.276043] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:24:38.614 [2024-07-21 12:06:37.276058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:38.614 [2024-07-21 12:06:37.276067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:24:38.614 [2024-07-21 12:06:37.276074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:24:38.614 [2024-07-21 12:06:37.276081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:24:38.614 [2024-07-21 12:06:37.276088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:24:38.614 [2024-07-21 12:06:37.276096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:24:38.614 [2024-07-21 12:06:37.276102] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:24:38.614 [2024-07-21 12:06:37.276110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:24:38.614 [2024-07-21 12:06:37.276118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:24:38.614 [2024-07-21 12:06:37.276124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:24:38.614 [2024-07-21 12:06:37.276132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:24:38.614 [2024-07-21 12:06:37.276139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:24:38.614 [2024-07-21 12:06:37.276145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:24:38.614 [2024-07-21 12:06:37.276153] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:24:38.614 [2024-07-21 12:06:37.276160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:24:38.614 [2024-07-21 12:06:37.276166] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:24:38.614 [2024-07-21 12:06:37.276179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:38.614 [2024-07-21 12:06:37.276187] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:38.614 [2024-07-21 12:06:37.276194] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 
blk_sz:0x480000 00:24:38.614 [2024-07-21 12:06:37.276202] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:24:38.614 [2024-07-21 12:06:37.276209] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:24:38.614 [2024-07-21 12:06:37.276217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:38.614 [2024-07-21 12:06:37.276224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:24:38.614 [2024-07-21 12:06:37.276232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.616 ms 00:24:38.614 [2024-07-21 12:06:37.276238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:38.614 [2024-07-21 12:06:37.276292] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:24:38.614 [2024-07-21 12:06:37.276302] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:24:41.900 [2024-07-21 12:06:40.666545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.900 [2024-07-21 12:06:40.666675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:24:41.900 [2024-07-21 12:06:40.666715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3396.788 ms 00:24:41.900 [2024-07-21 12:06:40.666735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.900 [2024-07-21 12:06:40.677213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.900 [2024-07-21 12:06:40.677325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:24:41.900 [2024-07-21 12:06:40.677356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.383 ms 00:24:41.900 [2024-07-21 12:06:40.677384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.900 [2024-07-21 12:06:40.677450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.900 [2024-07-21 12:06:40.677480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:24:41.900 [2024-07-21 12:06:40.677514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:24:41.900 [2024-07-21 12:06:40.677533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.900 [2024-07-21 12:06:40.687242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.900 [2024-07-21 12:06:40.687346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:24:41.900 [2024-07-21 12:06:40.687382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.642 ms 00:24:41.900 [2024-07-21 12:06:40.687404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.900 [2024-07-21 12:06:40.687471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.900 [2024-07-21 12:06:40.687537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:24:41.900 [2024-07-21 12:06:40.687568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:41.900 [2024-07-21 12:06:40.687588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.900 [2024-07-21 12:06:40.688098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.900 [2024-07-21 12:06:40.688151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Initialize trim map 00:24:41.900 [2024-07-21 12:06:40.688181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.418 ms 00:24:41.900 [2024-07-21 12:06:40.688201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.900 [2024-07-21 12:06:40.688277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.900 [2024-07-21 12:06:40.688309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:24:41.900 [2024-07-21 12:06:40.688342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:24:41.900 [2024-07-21 12:06:40.688362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.900 [2024-07-21 12:06:40.695126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.900 [2024-07-21 12:06:40.695198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:24:41.900 [2024-07-21 12:06:40.695231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.737 ms 00:24:41.900 [2024-07-21 12:06:40.695251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.900 [2024-07-21 12:06:40.697926] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:24:41.900 [2024-07-21 12:06:40.698012] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:24:41.900 [2024-07-21 12:06:40.698068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.900 [2024-07-21 12:06:40.698098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:24:41.900 [2024-07-21 12:06:40.698127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.667 ms 00:24:41.900 [2024-07-21 12:06:40.698148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.900 [2024-07-21 12:06:40.701545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.900 [2024-07-21 12:06:40.701607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:24:41.900 [2024-07-21 12:06:40.701673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.344 ms 00:24:41.900 [2024-07-21 12:06:40.701695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.900 [2024-07-21 12:06:40.703150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.900 [2024-07-21 12:06:40.703213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:24:41.900 [2024-07-21 12:06:40.703239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.406 ms 00:24:41.900 [2024-07-21 12:06:40.703259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.900 [2024-07-21 12:06:40.704570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.900 [2024-07-21 12:06:40.704636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:24:41.900 [2024-07-21 12:06:40.704678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.259 ms 00:24:41.900 [2024-07-21 12:06:40.704698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.900 [2024-07-21 12:06:40.705024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.900 [2024-07-21 12:06:40.705091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:24:41.900 [2024-07-21 12:06:40.705123] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.226 ms 00:24:41.900 [2024-07-21 12:06:40.705147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.900 [2024-07-21 12:06:40.731764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.900 [2024-07-21 12:06:40.731919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:24:41.900 [2024-07-21 12:06:40.731952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.619 ms 00:24:41.900 [2024-07-21 12:06:40.731990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.900 [2024-07-21 12:06:40.737970] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:24:41.900 [2024-07-21 12:06:40.738692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.900 [2024-07-21 12:06:40.738739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:24:41.900 [2024-07-21 12:06:40.738770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.635 ms 00:24:41.900 [2024-07-21 12:06:40.738790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.900 [2024-07-21 12:06:40.738885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.900 [2024-07-21 12:06:40.738920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:24:41.900 [2024-07-21 12:06:40.738963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:24:41.900 [2024-07-21 12:06:40.738990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.900 [2024-07-21 12:06:40.739055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.901 [2024-07-21 12:06:40.739091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:24:41.901 [2024-07-21 12:06:40.739118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:24:41.901 [2024-07-21 12:06:40.739145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.901 [2024-07-21 12:06:40.739183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.901 [2024-07-21 12:06:40.739204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:24:41.901 [2024-07-21 12:06:40.739236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:24:41.901 [2024-07-21 12:06:40.739256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.901 [2024-07-21 12:06:40.739308] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:24:41.901 [2024-07-21 12:06:40.739340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.901 [2024-07-21 12:06:40.739378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:24:41.901 [2024-07-21 12:06:40.739416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:24:41.901 [2024-07-21 12:06:40.739443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.901 [2024-07-21 12:06:40.742872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.901 [2024-07-21 12:06:40.742936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:24:41.901 [2024-07-21 12:06:40.742966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.382 ms 00:24:41.901 [2024-07-21 12:06:40.743011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:24:41.901 [2024-07-21 12:06:40.743095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:41.901 [2024-07-21 12:06:40.743128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:24:41.901 [2024-07-21 12:06:40.743164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:24:41.901 [2024-07-21 12:06:40.743204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:41.901 [2024-07-21 12:06:40.744390] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3489.513 ms, result 0 00:24:41.901 [2024-07-21 12:06:40.759285] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.160 [2024-07-21 12:06:40.775234] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:24:42.160 [2024-07-21 12:06:40.783324] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:42.160 12:06:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:42.160 12:06:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:24:42.160 12:06:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:42.160 12:06:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:24:42.160 12:06:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:24:42.420 [2024-07-21 12:06:41.058984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:42.420 [2024-07-21 12:06:41.059031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:24:42.420 [2024-07-21 12:06:41.059044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:24:42.420 [2024-07-21 12:06:41.059067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:42.420 [2024-07-21 12:06:41.059102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:42.420 [2024-07-21 12:06:41.059110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:24:42.420 [2024-07-21 12:06:41.059118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:24:42.420 [2024-07-21 12:06:41.059124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:42.420 [2024-07-21 12:06:41.059153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:42.420 [2024-07-21 12:06:41.059162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:24:42.420 [2024-07-21 12:06:41.059170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:42.420 [2024-07-21 12:06:41.059176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:42.420 [2024-07-21 12:06:41.059238] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.280 ms, result 0 00:24:42.420 true 00:24:42.420 12:06:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:42.420 { 00:24:42.420 "name": "ftl", 00:24:42.420 "properties": [ 00:24:42.420 { 00:24:42.420 "name": "superblock_version", 00:24:42.420 "value": 5, 00:24:42.420 "read-only": true 00:24:42.420 }, 00:24:42.420 { 
00:24:42.420 "name": "base_device", 00:24:42.420 "bands": [ 00:24:42.420 { 00:24:42.420 "id": 0, 00:24:42.420 "state": "CLOSED", 00:24:42.420 "validity": 1.0 00:24:42.420 }, 00:24:42.420 { 00:24:42.420 "id": 1, 00:24:42.420 "state": "CLOSED", 00:24:42.420 "validity": 1.0 00:24:42.420 }, 00:24:42.420 { 00:24:42.420 "id": 2, 00:24:42.420 "state": "CLOSED", 00:24:42.420 "validity": 0.007843137254901933 00:24:42.420 }, 00:24:42.420 { 00:24:42.420 "id": 3, 00:24:42.420 "state": "FREE", 00:24:42.420 "validity": 0.0 00:24:42.420 }, 00:24:42.420 { 00:24:42.420 "id": 4, 00:24:42.420 "state": "FREE", 00:24:42.420 "validity": 0.0 00:24:42.420 }, 00:24:42.420 { 00:24:42.420 "id": 5, 00:24:42.420 "state": "FREE", 00:24:42.420 "validity": 0.0 00:24:42.420 }, 00:24:42.420 { 00:24:42.420 "id": 6, 00:24:42.420 "state": "FREE", 00:24:42.420 "validity": 0.0 00:24:42.421 }, 00:24:42.421 { 00:24:42.421 "id": 7, 00:24:42.421 "state": "FREE", 00:24:42.421 "validity": 0.0 00:24:42.421 }, 00:24:42.421 { 00:24:42.421 "id": 8, 00:24:42.421 "state": "FREE", 00:24:42.421 "validity": 0.0 00:24:42.421 }, 00:24:42.421 { 00:24:42.421 "id": 9, 00:24:42.421 "state": "FREE", 00:24:42.421 "validity": 0.0 00:24:42.421 }, 00:24:42.421 { 00:24:42.421 "id": 10, 00:24:42.421 "state": "FREE", 00:24:42.421 "validity": 0.0 00:24:42.421 }, 00:24:42.421 { 00:24:42.421 "id": 11, 00:24:42.421 "state": "FREE", 00:24:42.421 "validity": 0.0 00:24:42.421 }, 00:24:42.421 { 00:24:42.421 "id": 12, 00:24:42.421 "state": "FREE", 00:24:42.421 "validity": 0.0 00:24:42.421 }, 00:24:42.421 { 00:24:42.421 "id": 13, 00:24:42.421 "state": "FREE", 00:24:42.421 "validity": 0.0 00:24:42.421 }, 00:24:42.421 { 00:24:42.421 "id": 14, 00:24:42.421 "state": "FREE", 00:24:42.421 "validity": 0.0 00:24:42.421 }, 00:24:42.421 { 00:24:42.421 "id": 15, 00:24:42.421 "state": "FREE", 00:24:42.421 "validity": 0.0 00:24:42.421 }, 00:24:42.421 { 00:24:42.421 "id": 16, 00:24:42.421 "state": "FREE", 00:24:42.421 "validity": 0.0 00:24:42.421 }, 00:24:42.421 { 00:24:42.421 "id": 17, 00:24:42.421 "state": "FREE", 00:24:42.421 "validity": 0.0 00:24:42.421 } 00:24:42.421 ], 00:24:42.421 "read-only": true 00:24:42.421 }, 00:24:42.421 { 00:24:42.421 "name": "cache_device", 00:24:42.421 "type": "bdev", 00:24:42.421 "chunks": [ 00:24:42.421 { 00:24:42.421 "id": 0, 00:24:42.421 "state": "INACTIVE", 00:24:42.421 "utilization": 0.0 00:24:42.421 }, 00:24:42.421 { 00:24:42.421 "id": 1, 00:24:42.421 "state": "OPEN", 00:24:42.421 "utilization": 0.0 00:24:42.421 }, 00:24:42.421 { 00:24:42.421 "id": 2, 00:24:42.421 "state": "OPEN", 00:24:42.421 "utilization": 0.0 00:24:42.421 }, 00:24:42.421 { 00:24:42.421 "id": 3, 00:24:42.421 "state": "FREE", 00:24:42.421 "utilization": 0.0 00:24:42.421 }, 00:24:42.421 { 00:24:42.421 "id": 4, 00:24:42.421 "state": "FREE", 00:24:42.421 "utilization": 0.0 00:24:42.421 } 00:24:42.421 ], 00:24:42.421 "read-only": true 00:24:42.421 }, 00:24:42.421 { 00:24:42.421 "name": "verbose_mode", 00:24:42.421 "value": true, 00:24:42.421 "unit": "", 00:24:42.421 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:24:42.421 }, 00:24:42.421 { 00:24:42.421 "name": "prep_upgrade_on_shutdown", 00:24:42.421 "value": false, 00:24:42.421 "unit": "", 00:24:42.421 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:24:42.421 } 00:24:42.421 ] 00:24:42.421 } 00:24:42.421 12:06:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:24:42.421 12:06:41 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:42.421 12:06:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:24:42.681 12:06:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:24:42.681 12:06:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:24:42.681 12:06:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:24:42.681 12:06:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:24:42.681 12:06:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:24:42.941 12:06:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:24:42.941 12:06:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:24:42.941 Validate MD5 checksum, iteration 1 00:24:42.941 12:06:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:24:42.941 12:06:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:24:42.941 12:06:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:24:42.941 12:06:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:42.941 12:06:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:24:42.941 12:06:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:42.941 12:06:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:42.941 12:06:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:42.941 12:06:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:42.941 12:06:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:42.941 12:06:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:42.941 [2024-07-21 12:06:41.721516] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
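The two xtrace'd jq pipelines above are the quiescence gate of the upgrade test: before the checksum loop starts, no NV-cache chunk may already hold data and no band may still be OPENED. A minimal sketch of that gate, assuming the rpc.py path from this workspace (the jq filters are copied from the trace; note that in the properties dump above the bands array sits under the "base_device" property, so the second filter matches nothing and counts 0 here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # NV-cache chunks that already hold data (all show utilization 0.0 above).
    used=$("$rpc" bdev_ftl_get_properties -b ftl | jq '[.properties[]
      | select(.name == "cache_device") | .chunks[]
      | select(.utilization != 0.0)] | length')

    # Bands still open for writing.
    opened=$("$rpc" bdev_ftl_get_properties -b ftl | jq '[.properties[]
      | select(.name == "bands") | .bands[]
      | select(.state == "OPENED")] | length')

    # Both counts were 0 in this run, so the checksum iterations can begin.
    [[ $used -eq 0 && $opened -eq 0 ]]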
00:24:42.941 [2024-07-21 12:06:41.721692] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94791 ] 00:24:43.200 [2024-07-21 12:06:41.872648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.200 [2024-07-21 12:06:41.920490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.775  Copying: 709/1024 [MB] (709 MBps) Copying: 1024/1024 [MB] (average 703 MBps) 00:24:45.775 00:24:45.775 12:06:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:24:45.775 12:06:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:47.679 12:06:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:24:47.679 12:06:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=23f30feb0c50df5cfdf6a16bcc28ae11 00:24:47.679 12:06:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 23f30feb0c50df5cfdf6a16bcc28ae11 != \2\3\f\3\0\f\e\b\0\c\5\0\d\f\5\c\f\d\f\6\a\1\6\b\c\c\2\8\a\e\1\1 ]] 00:24:47.679 12:06:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:24:47.679 12:06:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:47.679 12:06:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:24:47.679 Validate MD5 checksum, iteration 2 00:24:47.679 12:06:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:47.679 12:06:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:47.679 12:06:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:47.679 12:06:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:47.679 12:06:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:47.679 12:06:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:24:47.679 [2024-07-21 12:06:46.213688] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
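Each checksum iteration above follows the same three-step pattern: read one 1 GiB window (1024 blocks of 1 MiB at queue depth 2) from the exported ftln1 bdev into a scratch file over NVMe/TCP, hash the file, and keep the first md5sum field as the digest. The backslash-riddled comparison in the trace is just bash xtrace output: the right-hand side of a [[ ... != ... ]] test is a pattern, so set -x prints it with every character escaped. A sketch of one iteration, with the flags taken verbatim from the trace:

    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    skip=0   # advanced by 1024 after every iteration (see skip=1024 above)

    # tcp_dd in ftl/common.sh boils down to this spdk_dd invocation:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
        --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip="$skip"

    sum=$(md5sum "$file" | cut -f1 -d' ')   # e.g. 23f30feb0c50df5cfdf6a16bcc28ae11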
00:24:47.679 [2024-07-21 12:06:46.213913] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94847 ] 00:24:47.679 [2024-07-21 12:06:46.373454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.679 [2024-07-21 12:06:46.419233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.642  Copying: 703/1024 [MB] (703 MBps) Copying: 1024/1024 [MB] (average 707 MBps) 00:24:54.642 00:24:54.642 12:06:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:24:54.642 12:06:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:24:55.576 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:24:55.576 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=34eca62b611c1693b8f41bfa0a51cb6c 00:24:55.576 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 34eca62b611c1693b8f41bfa0a51cb6c != \3\4\e\c\a\6\2\b\6\1\1\c\1\6\9\3\b\8\f\4\1\b\f\a\0\a\5\1\c\b\6\c ]] 00:24:55.576 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:24:55.576 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:55.576 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:24:55.576 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 94728 ]] 00:24:55.576 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 94728 00:24:55.576 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:24:55.576 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:24:55.576 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:24:55.576 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:24:55.576 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:55.576 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=94937 00:24:55.576 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:24:55.576 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:24:55.576 12:06:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 94937 00:24:55.576 12:06:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@827 -- # '[' -z 94937 ']' 00:24:55.576 12:06:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.576 12:06:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:55.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.576 12:06:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
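The trace above is the dirty-shutdown half of the test: the target holding the FTL device is killed with SIGKILL, so it gets no chance to flush or persist state cleanly (bash reports the killed job a little further below), and a fresh target is started from the saved tgt.json so that recovery must rebuild state from the superblock, the NV cache, and shared memory. A sketch of that sequence, assuming the script structure implied by the helper names visible in the trace:

    kill -9 "$spdk_tgt_pid"   # 94728 here: simulate power loss, no final flush
    unset spdk_tgt_pid

    # Relaunch from the config saved before the crash; waitforlisten (an
    # autotest_common.sh helper) blocks until the RPC socket answers.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"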
00:24:55.576 12:06:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:55.576 12:06:54 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:55.576 [2024-07-21 12:06:54.413590] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:24:55.576 [2024-07-21 12:06:54.413716] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94937 ] 00:24:55.834 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 826: 94728 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:24:55.834 [2024-07-21 12:06:54.580719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.834 [2024-07-21 12:06:54.625311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.092 [2024-07-21 12:06:54.901604] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:24:56.092 [2024-07-21 12:06:54.901673] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:24:56.350 [2024-07-21 12:06:55.037703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.351 [2024-07-21 12:06:55.037832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:24:56.351 [2024-07-21 12:06:55.037869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:24:56.351 [2024-07-21 12:06:55.037892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.351 [2024-07-21 12:06:55.037967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.351 [2024-07-21 12:06:55.038021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:24:56.351 [2024-07-21 12:06:55.038034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:24:56.351 [2024-07-21 12:06:55.038049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.351 [2024-07-21 12:06:55.038079] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:24:56.351 [2024-07-21 12:06:55.038345] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:24:56.351 [2024-07-21 12:06:55.038365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.351 [2024-07-21 12:06:55.038374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:24:56.351 [2024-07-21 12:06:55.038384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.294 ms 00:24:56.351 [2024-07-21 12:06:55.038401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.351 [2024-07-21 12:06:55.038701] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:24:56.351 [2024-07-21 12:06:55.042775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.351 [2024-07-21 12:06:55.042829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:24:56.351 [2024-07-21 12:06:55.042845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.083 ms 00:24:56.351 [2024-07-21 12:06:55.042852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.351 [2024-07-21 12:06:55.043836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.351 
[2024-07-21 12:06:55.043880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:24:56.351 [2024-07-21 12:06:55.043890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:24:56.351 [2024-07-21 12:06:55.043897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.351 [2024-07-21 12:06:55.044149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.351 [2024-07-21 12:06:55.044164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:24:56.351 [2024-07-21 12:06:55.044173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.196 ms 00:24:56.351 [2024-07-21 12:06:55.044180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.351 [2024-07-21 12:06:55.044218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.351 [2024-07-21 12:06:55.044229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:24:56.351 [2024-07-21 12:06:55.044245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:24:56.351 [2024-07-21 12:06:55.044260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.351 [2024-07-21 12:06:55.044294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.351 [2024-07-21 12:06:55.044302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:24:56.351 [2024-07-21 12:06:55.044317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:24:56.351 [2024-07-21 12:06:55.044325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.351 [2024-07-21 12:06:55.044346] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:24:56.351 [2024-07-21 12:06:55.045117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.351 [2024-07-21 12:06:55.045130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:24:56.351 [2024-07-21 12:06:55.045141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.778 ms 00:24:56.351 [2024-07-21 12:06:55.045151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.351 [2024-07-21 12:06:55.045180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.351 [2024-07-21 12:06:55.045198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:24:56.351 [2024-07-21 12:06:55.045213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:56.351 [2024-07-21 12:06:55.045220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.351 [2024-07-21 12:06:55.045239] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:24:56.351 [2024-07-21 12:06:55.045259] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:24:56.351 [2024-07-21 12:06:55.045297] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:24:56.351 [2024-07-21 12:06:55.045316] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x168 bytes 00:24:56.351 [2024-07-21 12:06:55.045400] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:24:56.351 [2024-07-21 12:06:55.045415] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:24:56.351 [2024-07-21 12:06:55.045425] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x168 bytes 00:24:56.351 [2024-07-21 12:06:55.045434] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:24:56.351 [2024-07-21 12:06:55.045443] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:24:56.351 [2024-07-21 12:06:55.045450] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:24:56.351 [2024-07-21 12:06:55.045460] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:24:56.351 [2024-07-21 12:06:55.045468] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:24:56.351 [2024-07-21 12:06:55.045475] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:24:56.351 [2024-07-21 12:06:55.045485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.351 [2024-07-21 12:06:55.045492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:24:56.351 [2024-07-21 12:06:55.045499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.247 ms 00:24:56.351 [2024-07-21 12:06:55.045506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.351 [2024-07-21 12:06:55.045574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.351 [2024-07-21 12:06:55.045582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:24:56.351 [2024-07-21 12:06:55.045589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:24:56.351 [2024-07-21 12:06:55.045596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.351 [2024-07-21 12:06:55.045677] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:24:56.351 [2024-07-21 12:06:55.045697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:24:56.351 [2024-07-21 12:06:55.045704] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:56.351 [2024-07-21 12:06:55.045712] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:56.351 [2024-07-21 12:06:55.045722] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:24:56.351 [2024-07-21 12:06:55.045729] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:24:56.351 [2024-07-21 12:06:55.045739] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:24:56.351 [2024-07-21 12:06:55.045746] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:24:56.351 [2024-07-21 12:06:55.045753] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:24:56.351 [2024-07-21 12:06:55.045759] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:56.351 [2024-07-21 12:06:55.045766] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:24:56.351 [2024-07-21 12:06:55.045773] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:24:56.351 [2024-07-21 12:06:55.045779] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:56.351 [2024-07-21 12:06:55.045786] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:24:56.351 [2024-07-21 12:06:55.045792] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:24:56.351 [2024-07-21 12:06:55.045799] 
ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:56.351 [2024-07-21 12:06:55.045805] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:24:56.351 [2024-07-21 12:06:55.045812] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:24:56.351 [2024-07-21 12:06:55.045830] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:56.351 [2024-07-21 12:06:55.045963] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:24:56.351 [2024-07-21 12:06:55.045975] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:24:56.351 [2024-07-21 12:06:55.045982] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:56.351 [2024-07-21 12:06:55.045988] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:24:56.352 [2024-07-21 12:06:55.045995] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:24:56.352 [2024-07-21 12:06:55.046001] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:56.352 [2024-07-21 12:06:55.046008] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:24:56.352 [2024-07-21 12:06:55.046015] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:24:56.352 [2024-07-21 12:06:55.046021] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:56.352 [2024-07-21 12:06:55.046027] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:24:56.352 [2024-07-21 12:06:55.046034] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:24:56.352 [2024-07-21 12:06:55.046040] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:24:56.352 [2024-07-21 12:06:55.046048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:24:56.352 [2024-07-21 12:06:55.046055] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:24:56.352 [2024-07-21 12:06:55.046062] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:56.352 [2024-07-21 12:06:55.046068] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:24:56.352 [2024-07-21 12:06:55.046074] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:24:56.352 [2024-07-21 12:06:55.046082] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:56.352 [2024-07-21 12:06:55.046089] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:24:56.352 [2024-07-21 12:06:55.046097] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:24:56.352 [2024-07-21 12:06:55.046103] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:56.352 [2024-07-21 12:06:55.046110] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:24:56.352 [2024-07-21 12:06:55.046126] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:24:56.352 [2024-07-21 12:06:55.046132] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:56.352 [2024-07-21 12:06:55.046139] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:24:56.352 [2024-07-21 12:06:55.046147] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:24:56.352 [2024-07-21 12:06:55.046155] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:24:56.352 [2024-07-21 12:06:55.046161] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:24:56.352 [2024-07-21 
12:06:55.046169] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:24:56.352 [2024-07-21 12:06:55.046176] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:24:56.352 [2024-07-21 12:06:55.046183] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:24:56.352 [2024-07-21 12:06:55.046189] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:24:56.352 [2024-07-21 12:06:55.046196] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:24:56.352 [2024-07-21 12:06:55.046207] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:24:56.352 [2024-07-21 12:06:55.046216] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:24:56.352 [2024-07-21 12:06:55.046232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:56.352 [2024-07-21 12:06:55.046241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:24:56.352 [2024-07-21 12:06:55.046249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:24:56.352 [2024-07-21 12:06:55.046257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:24:56.352 [2024-07-21 12:06:55.046264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:24:56.352 [2024-07-21 12:06:55.046271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:24:56.352 [2024-07-21 12:06:55.046278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:24:56.352 [2024-07-21 12:06:55.046285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:24:56.352 [2024-07-21 12:06:55.046292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:24:56.352 [2024-07-21 12:06:55.046299] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:24:56.352 [2024-07-21 12:06:55.046306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:24:56.352 [2024-07-21 12:06:55.046313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:24:56.352 [2024-07-21 12:06:55.046320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:24:56.352 [2024-07-21 12:06:55.046327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:24:56.352 [2024-07-21 12:06:55.046336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:24:56.352 [2024-07-21 12:06:55.046343] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:24:56.352 
[2024-07-21 12:06:55.046354] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:56.352 [2024-07-21 12:06:55.046362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:56.352 [2024-07-21 12:06:55.046369] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:24:56.352 [2024-07-21 12:06:55.046377] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:24:56.352 [2024-07-21 12:06:55.046384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:24:56.352 [2024-07-21 12:06:55.046392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.352 [2024-07-21 12:06:55.046399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:24:56.352 [2024-07-21 12:06:55.046407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.768 ms 00:24:56.352 [2024-07-21 12:06:55.046413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.352 [2024-07-21 12:06:55.055712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.352 [2024-07-21 12:06:55.055742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:24:56.352 [2024-07-21 12:06:55.055753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.249 ms 00:24:56.352 [2024-07-21 12:06:55.055760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.352 [2024-07-21 12:06:55.055799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.352 [2024-07-21 12:06:55.055811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:24:56.352 [2024-07-21 12:06:55.055820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:24:56.352 [2024-07-21 12:06:55.055854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.352 [2024-07-21 12:06:55.065483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.352 [2024-07-21 12:06:55.065523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:24:56.352 [2024-07-21 12:06:55.065538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.579 ms 00:24:56.352 [2024-07-21 12:06:55.065546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.352 [2024-07-21 12:06:55.065589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.352 [2024-07-21 12:06:55.065598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:24:56.352 [2024-07-21 12:06:55.065606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:24:56.352 [2024-07-21 12:06:55.065613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.352 [2024-07-21 12:06:55.065687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.352 [2024-07-21 12:06:55.065697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:24:56.352 [2024-07-21 12:06:55.065716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:24:56.352 [2024-07-21 12:06:55.065723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
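The SB metadata rows above (blk_offs/blk_sz) and the earlier human-readable region dump describe the same layout in two units: the superblock records 4 KiB FTL blocks, while the dump prints MiB. A quick converter makes the rows easy to cross-check, assuming that 4 KiB block size (which the matching numbers below bear out):

    to_mib() { awk -v blks="$1" 'BEGIN { printf "%.2f MiB\n", blks * 4096 / 1048576 }'; }

    to_mib $((0x20))      # l2p blk_offs -> 0.12 MiB, matching "offset: 0.12 MiB"
    to_mib $((0xe80))     # l2p blk_sz   -> 14.50 MiB, matching "blocks: 14.50 MiB"
    to_mib $((0x480000))  # data_btm     -> 18432.00 MiB, i.e. the 18 GiB data region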
00:24:56.352 [2024-07-21 12:06:55.065759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.352 [2024-07-21 12:06:55.065778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:24:56.352 [2024-07-21 12:06:55.065786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:24:56.352 [2024-07-21 12:06:55.065794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.352 [2024-07-21 12:06:55.072737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.352 [2024-07-21 12:06:55.072774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:24:56.353 [2024-07-21 12:06:55.072789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.935 ms 00:24:56.353 [2024-07-21 12:06:55.072805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.353 [2024-07-21 12:06:55.072931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.353 [2024-07-21 12:06:55.072942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:24:56.353 [2024-07-21 12:06:55.072951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:24:56.353 [2024-07-21 12:06:55.072965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.353 [2024-07-21 12:06:55.087325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.353 [2024-07-21 12:06:55.087366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:24:56.353 [2024-07-21 12:06:55.087377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.369 ms 00:24:56.353 [2024-07-21 12:06:55.087386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.353 [2024-07-21 12:06:55.088913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.353 [2024-07-21 12:06:55.088949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:24:56.353 [2024-07-21 12:06:55.088962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.341 ms 00:24:56.353 [2024-07-21 12:06:55.088976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.353 [2024-07-21 12:06:55.110664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.353 [2024-07-21 12:06:55.110728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:24:56.353 [2024-07-21 12:06:55.110743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.691 ms 00:24:56.353 [2024-07-21 12:06:55.110764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.353 [2024-07-21 12:06:55.110960] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:24:56.353 [2024-07-21 12:06:55.111076] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:24:56.353 [2024-07-21 12:06:55.111193] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:24:56.353 [2024-07-21 12:06:55.111320] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:24:56.353 [2024-07-21 12:06:55.111329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.353 [2024-07-21 12:06:55.111340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:24:56.353 [2024-07-21 12:06:55.111350] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.485 ms 00:24:56.353 [2024-07-21 12:06:55.111357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.353 [2024-07-21 12:06:55.111427] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:24:56.353 [2024-07-21 12:06:55.111438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.353 [2024-07-21 12:06:55.111446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:24:56.353 [2024-07-21 12:06:55.111464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:24:56.353 [2024-07-21 12:06:55.111472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.353 [2024-07-21 12:06:55.114379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.353 [2024-07-21 12:06:55.114415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:24:56.353 [2024-07-21 12:06:55.114438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.890 ms 00:24:56.353 [2024-07-21 12:06:55.114461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.353 [2024-07-21 12:06:55.115107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:56.353 [2024-07-21 12:06:55.115125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:24:56.353 [2024-07-21 12:06:55.115134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:24:56.353 [2024-07-21 12:06:55.115140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:56.353 [2024-07-21 12:06:55.115368] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:24:56.919 [2024-07-21 12:06:55.714671] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:24:56.919 [2024-07-21 12:06:55.714860] ftl_nv_cache.c:2471:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:24:57.486 [2024-07-21 12:06:56.320419] ftl_nv_cache.c:2408:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:24:57.486 [2024-07-21 12:06:56.320528] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:57.486 [2024-07-21 12:06:56.320556] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:24:57.486 [2024-07-21 12:06:56.320568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:57.486 [2024-07-21 12:06:56.320577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:24:57.486 [2024-07-21 12:06:56.320588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1207.708 ms 00:24:57.486 [2024-07-21 12:06:56.320597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:57.486 [2024-07-21 12:06:56.320633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:57.486 [2024-07-21 12:06:56.320642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:24:57.486 [2024-07-21 12:06:56.320650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:24:57.486 [2024-07-21 12:06:56.320663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:57.486 
[2024-07-21 12:06:56.327406] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:24:57.486 [2024-07-21 12:06:56.327509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:57.486 [2024-07-21 12:06:56.327520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:24:57.486 [2024-07-21 12:06:56.327529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.843 ms 00:24:57.486 [2024-07-21 12:06:56.327536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:57.486 [2024-07-21 12:06:56.328154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:57.486 [2024-07-21 12:06:56.328168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:24:57.486 [2024-07-21 12:06:56.328176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.539 ms 00:24:57.486 [2024-07-21 12:06:56.328183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:57.486 [2024-07-21 12:06:56.330105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:57.486 [2024-07-21 12:06:56.330125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:24:57.486 [2024-07-21 12:06:56.330135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.910 ms 00:24:57.486 [2024-07-21 12:06:56.330143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:57.486 [2024-07-21 12:06:56.330194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:57.486 [2024-07-21 12:06:56.330207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:24:57.486 [2024-07-21 12:06:56.330223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:24:57.486 [2024-07-21 12:06:56.330230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:57.486 [2024-07-21 12:06:56.330333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:57.486 [2024-07-21 12:06:56.330342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:24:57.486 [2024-07-21 12:06:56.330349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:24:57.486 [2024-07-21 12:06:56.330356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:57.486 [2024-07-21 12:06:56.330375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:57.486 [2024-07-21 12:06:56.330382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:24:57.486 [2024-07-21 12:06:56.330391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:24:57.486 [2024-07-21 12:06:56.330399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:57.486 [2024-07-21 12:06:56.330427] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:24:57.486 [2024-07-21 12:06:56.330436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:57.486 [2024-07-21 12:06:56.330455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:24:57.486 [2024-07-21 12:06:56.330463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:24:57.486 [2024-07-21 12:06:56.330469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:57.486 [2024-07-21 12:06:56.330514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:24:57.486 [2024-07-21 12:06:56.330521] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:24:57.486 [2024-07-21 12:06:56.330528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:24:57.486 [2024-07-21 12:06:56.330539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:24:57.486 [2024-07-21 12:06:56.331527] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1295.883 ms, result 0 00:24:57.486 [2024-07-21 12:06:56.343903] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:57.745 [2024-07-21 12:06:56.359859] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:24:57.745 [2024-07-21 12:06:56.367945] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:24:58.311 12:06:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:58.311 12:06:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # return 0 00:24:58.311 12:06:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:24:58.311 12:06:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:24:58.311 12:06:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:24:58.311 12:06:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:24:58.311 12:06:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:24:58.311 12:06:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:24:58.311 12:06:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:24:58.311 Validate MD5 checksum, iteration 1 00:24:58.311 12:06:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:58.311 12:06:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:24:58.311 12:06:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:24:58.311 12:06:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:24:58.311 12:06:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:24:58.311 12:06:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:24:58.311 [2024-07-21 12:06:57.003717] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:24:58.311 [2024-07-21 12:06:57.003952] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94966 ] 00:24:58.311 [2024-07-21 12:06:57.161875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.569 [2024-07-21 12:06:57.209145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.153  Copying: 703/1024 [MB] (703 MBps) Copying: 1024/1024 [MB] (average 689 MBps) 00:25:01.153 00:25:01.153 12:06:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:25:01.153 12:06:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:03.057 12:07:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:03.057 12:07:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=23f30feb0c50df5cfdf6a16bcc28ae11 00:25:03.057 12:07:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 23f30feb0c50df5cfdf6a16bcc28ae11 != \2\3\f\3\0\f\e\b\0\c\5\0\d\f\5\c\f\d\f\6\a\1\6\b\c\c\2\8\a\e\1\1 ]] 00:25:03.057 12:07:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:03.057 12:07:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:03.057 12:07:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:25:03.057 Validate MD5 checksum, iteration 2 00:25:03.057 12:07:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:03.057 12:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:25:03.057 12:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:25:03.057 12:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:25:03.057 12:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:25:03.057 12:07:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:25:03.057 [2024-07-21 12:07:01.665410] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:25:03.057 [2024-07-21 12:07:01.665636] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95022 ] 00:25:03.057 [2024-07-21 12:07:01.823949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:03.057 [2024-07-21 12:07:01.875444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.569  Copying: 671/1024 [MB] (671 MBps) Copying: 1024/1024 [MB] (average 659 MBps) 00:25:05.569 00:25:05.829 12:07:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:25:05.829 12:07:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:07.737 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:25:07.737 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=34eca62b611c1693b8f41bfa0a51cb6c 00:25:07.737 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 34eca62b611c1693b8f41bfa0a51cb6c != \3\4\e\c\a\6\2\b\6\1\1\c\1\6\9\3\b\8\f\4\1\b\f\a\0\a\5\1\c\b\6\c ]] 00:25:07.737 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:25:07.737 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:25:07.737 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:25:07.737 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:25:07.737 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:25:07.737 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:25:07.737 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:25:07.737 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:25:07.737 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:25:07.737 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:25:07.737 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 94937 ]] 00:25:07.737 12:07:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 94937 00:25:07.737 12:07:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@946 -- # '[' -z 94937 ']' 00:25:07.737 12:07:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # kill -0 94937 00:25:07.737 12:07:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # uname 00:25:07.737 12:07:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:07.737 12:07:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 94937 00:25:07.737 12:07:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:07.737 12:07:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:07.737 12:07:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # echo 'killing process with pid 94937' 00:25:07.737 killing process with pid 94937 00:25:07.737 12:07:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@965 -- # kill 94937 00:25:07.737 12:07:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # wait 94937 
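Both iterations above read 1024 MiB back over NVMe/TCP via spdk_dd and compare the resulting md5sum against a stored reference value (23f30feb… for the first GiB, 34eca62b… for the second), so the glob-escaped [[ ... != ... ]] tests pass. The xtrace corresponds to a loop of roughly this shape, reconstructed from the upgrade_shutdown.sh@96-105 lines echoed above; the array holding the reference checksums is an assumption, named expected here:

    # sketch of test_validate_checksum; expected[] is a hypothetical name for
    # the checksums this run is validated against
    skip=0
    for (( i = 0; i < iterations; i++ )); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
        [[ $sum == "${expected[$i]}" ]]
    done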
00:25:07.737 [2024-07-21 12:07:06.459045] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:25:07.737 [2024-07-21 12:07:06.463219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.737 [2024-07-21 12:07:06.463302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:25:07.737 [2024-07-21 12:07:06.463335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:25:07.737 [2024-07-21 12:07:06.463356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.737 [2024-07-21 12:07:06.463393] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:25:07.737 [2024-07-21 12:07:06.464061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.737 [2024-07-21 12:07:06.464096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:25:07.737 [2024-07-21 12:07:06.464125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.636 ms 00:25:07.737 [2024-07-21 12:07:06.464144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.737 [2024-07-21 12:07:06.464368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.737 [2024-07-21 12:07:06.464401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:25:07.737 [2024-07-21 12:07:06.464429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.181 ms 00:25:07.737 [2024-07-21 12:07:06.464460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.737 [2024-07-21 12:07:06.465538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.737 [2024-07-21 12:07:06.465601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:25:07.737 [2024-07-21 12:07:06.465634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.029 ms 00:25:07.737 [2024-07-21 12:07:06.465654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.737 [2024-07-21 12:07:06.466660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.737 [2024-07-21 12:07:06.466712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:25:07.737 [2024-07-21 12:07:06.466740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.953 ms 00:25:07.737 [2024-07-21 12:07:06.466777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.737 [2024-07-21 12:07:06.467870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.737 [2024-07-21 12:07:06.467939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:25:07.737 [2024-07-21 12:07:06.467971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.009 ms 00:25:07.737 [2024-07-21 12:07:06.467992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.737 [2024-07-21 12:07:06.469233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.737 [2024-07-21 12:07:06.469308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:25:07.737 [2024-07-21 12:07:06.469358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.193 ms 00:25:07.737 [2024-07-21 12:07:06.469387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.737 [2024-07-21 12:07:06.469498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.737 
[2024-07-21 12:07:06.469548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:25:07.737 [2024-07-21 12:07:06.469580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:25:07.737 [2024-07-21 12:07:06.469602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.737 [2024-07-21 12:07:06.470697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.737 [2024-07-21 12:07:06.470763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:25:07.737 [2024-07-21 12:07:06.470799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.044 ms 00:25:07.737 [2024-07-21 12:07:06.470837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.737 [2024-07-21 12:07:06.471991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.737 [2024-07-21 12:07:06.472015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:25:07.737 [2024-07-21 12:07:06.472025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.067 ms 00:25:07.737 [2024-07-21 12:07:06.472032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.737 [2024-07-21 12:07:06.472956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.737 [2024-07-21 12:07:06.472976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:25:07.737 [2024-07-21 12:07:06.472985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.898 ms 00:25:07.737 [2024-07-21 12:07:06.472993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.737 [2024-07-21 12:07:06.473966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.737 [2024-07-21 12:07:06.473983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:25:07.737 [2024-07-21 12:07:06.473991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.916 ms 00:25:07.737 [2024-07-21 12:07:06.473997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.737 [2024-07-21 12:07:06.474020] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:25:07.737 [2024-07-21 12:07:06.474046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:07.737 [2024-07-21 12:07:06.474059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:25:07.737 [2024-07-21 12:07:06.474067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:25:07.737 [2024-07-21 12:07:06.474075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:07.737 [2024-07-21 12:07:06.474082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:07.737 [2024-07-21 12:07:06.474089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:07.737 [2024-07-21 12:07:06.474096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:07.737 [2024-07-21 12:07:06.474103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:07.737 [2024-07-21 12:07:06.474111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:07.737 [2024-07-21 
12:07:06.474120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:07.737 [2024-07-21 12:07:06.474127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:07.737 [2024-07-21 12:07:06.474134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:07.738 [2024-07-21 12:07:06.474141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:07.738 [2024-07-21 12:07:06.474148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:07.738 [2024-07-21 12:07:06.474155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:07.738 [2024-07-21 12:07:06.474163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:07.738 [2024-07-21 12:07:06.474170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:07.738 [2024-07-21 12:07:06.474177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:07.738 [2024-07-21 12:07:06.474186] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:25:07.738 [2024-07-21 12:07:06.474193] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 19130d7f-9734-4b81-99ba-8321fb54dc37 00:25:07.738 [2024-07-21 12:07:06.474202] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:25:07.738 [2024-07-21 12:07:06.474211] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:25:07.738 [2024-07-21 12:07:06.474217] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:25:07.738 [2024-07-21 12:07:06.474241] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:25:07.738 [2024-07-21 12:07:06.474248] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:25:07.738 [2024-07-21 12:07:06.474256] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:25:07.738 [2024-07-21 12:07:06.474262] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:25:07.738 [2024-07-21 12:07:06.474269] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:25:07.738 [2024-07-21 12:07:06.474275] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:25:07.738 [2024-07-21 12:07:06.474282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.738 [2024-07-21 12:07:06.474290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:25:07.738 [2024-07-21 12:07:06.474298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.263 ms 00:25:07.738 [2024-07-21 12:07:06.474305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.738 [2024-07-21 12:07:06.475991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.738 [2024-07-21 12:07:06.476016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:25:07.738 [2024-07-21 12:07:06.476025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.671 ms 00:25:07.738 [2024-07-21 12:07:06.476032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.738 [2024-07-21 12:07:06.476134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:25:07.738 [2024-07-21 12:07:06.476142] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:25:07.738 [2024-07-21 12:07:06.476150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.083 ms 00:25:07.738 [2024-07-21 12:07:06.476158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.738 [2024-07-21 12:07:06.482705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:07.738 [2024-07-21 12:07:06.482765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:25:07.738 [2024-07-21 12:07:06.482794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:07.738 [2024-07-21 12:07:06.482814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.738 [2024-07-21 12:07:06.482879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:07.738 [2024-07-21 12:07:06.482900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:25:07.738 [2024-07-21 12:07:06.482920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:07.738 [2024-07-21 12:07:06.482939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.738 [2024-07-21 12:07:06.483031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:07.738 [2024-07-21 12:07:06.483093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:25:07.738 [2024-07-21 12:07:06.483129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:07.738 [2024-07-21 12:07:06.483151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.738 [2024-07-21 12:07:06.483204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:07.738 [2024-07-21 12:07:06.483235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:25:07.738 [2024-07-21 12:07:06.483269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:07.738 [2024-07-21 12:07:06.483310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.738 [2024-07-21 12:07:06.496979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:07.738 [2024-07-21 12:07:06.497094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:25:07.738 [2024-07-21 12:07:06.497127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:07.738 [2024-07-21 12:07:06.497149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.738 [2024-07-21 12:07:06.505392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:07.738 [2024-07-21 12:07:06.505484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:25:07.738 [2024-07-21 12:07:06.505529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:07.738 [2024-07-21 12:07:06.505563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.738 [2024-07-21 12:07:06.505652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:07.738 [2024-07-21 12:07:06.505676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:25:07.738 [2024-07-21 12:07:06.505695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:07.738 [2024-07-21 12:07:06.505714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.738 [2024-07-21 12:07:06.505758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Rollback 00:25:07.738 [2024-07-21 12:07:06.505779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:25:07.738 [2024-07-21 12:07:06.505844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:07.738 [2024-07-21 12:07:06.505873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.738 [2024-07-21 12:07:06.505970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:07.738 [2024-07-21 12:07:06.506014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:25:07.738 [2024-07-21 12:07:06.506042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:07.738 [2024-07-21 12:07:06.506063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.738 [2024-07-21 12:07:06.506119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:07.738 [2024-07-21 12:07:06.506152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:25:07.738 [2024-07-21 12:07:06.506178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:07.738 [2024-07-21 12:07:06.506206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.738 [2024-07-21 12:07:06.506268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:07.738 [2024-07-21 12:07:06.506302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:25:07.738 [2024-07-21 12:07:06.506334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:07.738 [2024-07-21 12:07:06.506361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.738 [2024-07-21 12:07:06.506419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:25:07.738 [2024-07-21 12:07:06.506455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:25:07.738 [2024-07-21 12:07:06.506482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:25:07.738 [2024-07-21 12:07:06.506511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:25:07.738 [2024-07-21 12:07:06.506660] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 43.494 ms, result 0 00:25:14.374 12:07:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:25:14.374 12:07:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:14.374 12:07:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:25:14.374 12:07:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:25:14.374 12:07:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:25:14.374 12:07:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:14.374 Remove shared memory files 00:25:14.374 12:07:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:25:14.374 12:07:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:14.374 12:07:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:25:14.374 12:07:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:25:14.374 12:07:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid94728 00:25:14.374 12:07:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f 
rm -f /dev/shm/iscsi 00:25:14.374 12:07:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:25:14.374 00:25:14.374 real 1m15.379s 00:25:14.374 user 1m37.504s 00:25:14.374 sys 0m19.641s 00:25:14.374 12:07:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:14.374 12:07:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:14.374 ************************************ 00:25:14.374 END TEST ftl_upgrade_shutdown 00:25:14.374 ************************************ 00:25:14.374 12:07:12 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:25:14.374 12:07:12 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:25:14.374 12:07:12 ftl -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:25:14.374 12:07:12 ftl -- common/autotest_common.sh@1103 -- # xtrace_disable 00:25:14.374 12:07:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:14.374 ************************************ 00:25:14.374 START TEST ftl_restore_fast 00:25:14.374 ************************************ 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1121 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:25:14.374 * Looking for test storage... 00:25:14.374 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.8pS4aj2rPJ 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:25:14.374 12:07:12 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=95208 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 95208 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- common/autotest_common.sh@827 -- # '[' -z 95208 ']' 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- common/autotest_common.sh@832 -- # local max_retries=100 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # xtrace_disable 00:25:14.374 12:07:12 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:25:14.374 [2024-07-21 12:07:13.011908] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:25:14.374 [2024-07-21 12:07:13.012029] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95208 ] 00:25:14.374 [2024-07-21 12:07:13.164955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.374 [2024-07-21 12:07:13.209690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.940 12:07:13 ftl.ftl_restore_fast -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:25:14.940 12:07:13 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # return 0 00:25:14.940 12:07:13 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:14.940 12:07:13 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:25:14.940 12:07:13 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:14.940 12:07:13 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:25:14.940 12:07:13 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:25:14.940 12:07:13 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:15.199 12:07:14 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:15.199 12:07:14 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:25:15.200 12:07:14 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:15.200 12:07:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=nvme0n1 00:25:15.200 12:07:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:25:15.200 12:07:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:25:15.200 12:07:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local 
nb 00:25:15.200 12:07:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:15.458 12:07:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:25:15.458 { 00:25:15.458 "name": "nvme0n1", 00:25:15.458 "aliases": [ 00:25:15.458 "f3243d07-a8b8-4459-aaea-5d484cc17a2e" 00:25:15.458 ], 00:25:15.458 "product_name": "NVMe disk", 00:25:15.458 "block_size": 4096, 00:25:15.458 "num_blocks": 1310720, 00:25:15.458 "uuid": "f3243d07-a8b8-4459-aaea-5d484cc17a2e", 00:25:15.458 "assigned_rate_limits": { 00:25:15.458 "rw_ios_per_sec": 0, 00:25:15.458 "rw_mbytes_per_sec": 0, 00:25:15.458 "r_mbytes_per_sec": 0, 00:25:15.458 "w_mbytes_per_sec": 0 00:25:15.458 }, 00:25:15.458 "claimed": true, 00:25:15.458 "claim_type": "read_many_write_one", 00:25:15.458 "zoned": false, 00:25:15.458 "supported_io_types": { 00:25:15.458 "read": true, 00:25:15.458 "write": true, 00:25:15.458 "unmap": true, 00:25:15.458 "write_zeroes": true, 00:25:15.458 "flush": true, 00:25:15.458 "reset": true, 00:25:15.458 "compare": true, 00:25:15.458 "compare_and_write": false, 00:25:15.458 "abort": true, 00:25:15.458 "nvme_admin": true, 00:25:15.458 "nvme_io": true 00:25:15.458 }, 00:25:15.458 "driver_specific": { 00:25:15.458 "nvme": [ 00:25:15.458 { 00:25:15.458 "pci_address": "0000:00:11.0", 00:25:15.458 "trid": { 00:25:15.458 "trtype": "PCIe", 00:25:15.459 "traddr": "0000:00:11.0" 00:25:15.459 }, 00:25:15.459 "ctrlr_data": { 00:25:15.459 "cntlid": 0, 00:25:15.459 "vendor_id": "0x1b36", 00:25:15.459 "model_number": "QEMU NVMe Ctrl", 00:25:15.459 "serial_number": "12341", 00:25:15.459 "firmware_revision": "8.0.0", 00:25:15.459 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:15.459 "oacs": { 00:25:15.459 "security": 0, 00:25:15.459 "format": 1, 00:25:15.459 "firmware": 0, 00:25:15.459 "ns_manage": 1 00:25:15.459 }, 00:25:15.459 "multi_ctrlr": false, 00:25:15.459 "ana_reporting": false 00:25:15.459 }, 00:25:15.459 "vs": { 00:25:15.459 "nvme_version": "1.4" 00:25:15.459 }, 00:25:15.459 "ns_data": { 00:25:15.459 "id": 1, 00:25:15.459 "can_share": false 00:25:15.459 } 00:25:15.459 } 00:25:15.459 ], 00:25:15.459 "mp_policy": "active_passive" 00:25:15.459 } 00:25:15.459 } 00:25:15.459 ]' 00:25:15.459 12:07:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:25:15.459 12:07:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:25:15.459 12:07:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:25:15.459 12:07:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=1310720 00:25:15.459 12:07:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=5120 00:25:15.459 12:07:14 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 5120 00:25:15.459 12:07:14 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:25:15.459 12:07:14 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:15.459 12:07:14 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:25:15.459 12:07:14 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:15.459 12:07:14 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:15.717 12:07:14 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=d4ebdc6e-ffc8-44d6-88c7-662d5d9c1deb 00:25:15.717 12:07:14 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:25:15.717 12:07:14 
ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d4ebdc6e-ffc8-44d6-88c7-662d5d9c1deb 00:25:15.976 12:07:14 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:16.235 12:07:14 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=42e53dec-25b3-4819-b71c-91640438d734 00:25:16.235 12:07:14 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 42e53dec-25b3-4819-b71c-91640438d734 00:25:16.235 12:07:15 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=a36996be-784e-4010-b0d7-6643fddefa5c 00:25:16.235 12:07:15 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:25:16.235 12:07:15 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a36996be-784e-4010-b0d7-6643fddefa5c 00:25:16.235 12:07:15 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:25:16.235 12:07:15 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:25:16.235 12:07:15 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=a36996be-784e-4010-b0d7-6643fddefa5c 00:25:16.235 12:07:15 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:25:16.495 12:07:15 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size a36996be-784e-4010-b0d7-6643fddefa5c 00:25:16.495 12:07:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=a36996be-784e-4010-b0d7-6643fddefa5c 00:25:16.495 12:07:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:25:16.495 12:07:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:25:16.495 12:07:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 00:25:16.495 12:07:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a36996be-784e-4010-b0d7-6643fddefa5c 00:25:16.495 12:07:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:25:16.495 { 00:25:16.495 "name": "a36996be-784e-4010-b0d7-6643fddefa5c", 00:25:16.495 "aliases": [ 00:25:16.495 "lvs/nvme0n1p0" 00:25:16.495 ], 00:25:16.495 "product_name": "Logical Volume", 00:25:16.495 "block_size": 4096, 00:25:16.495 "num_blocks": 26476544, 00:25:16.495 "uuid": "a36996be-784e-4010-b0d7-6643fddefa5c", 00:25:16.495 "assigned_rate_limits": { 00:25:16.495 "rw_ios_per_sec": 0, 00:25:16.495 "rw_mbytes_per_sec": 0, 00:25:16.495 "r_mbytes_per_sec": 0, 00:25:16.495 "w_mbytes_per_sec": 0 00:25:16.495 }, 00:25:16.495 "claimed": false, 00:25:16.495 "zoned": false, 00:25:16.495 "supported_io_types": { 00:25:16.495 "read": true, 00:25:16.495 "write": true, 00:25:16.495 "unmap": true, 00:25:16.495 "write_zeroes": true, 00:25:16.495 "flush": false, 00:25:16.495 "reset": true, 00:25:16.495 "compare": false, 00:25:16.495 "compare_and_write": false, 00:25:16.495 "abort": false, 00:25:16.495 "nvme_admin": false, 00:25:16.495 "nvme_io": false 00:25:16.495 }, 00:25:16.495 "driver_specific": { 00:25:16.495 "lvol": { 00:25:16.495 "lvol_store_uuid": "42e53dec-25b3-4819-b71c-91640438d734", 00:25:16.495 "base_bdev": "nvme0n1", 00:25:16.495 "thin_provision": true, 00:25:16.495 "num_allocated_clusters": 0, 00:25:16.495 "snapshot": false, 00:25:16.495 "clone": false, 00:25:16.495 "esnap_clone": false 00:25:16.495 } 00:25:16.495 } 00:25:16.495 } 00:25:16.495 ]' 00:25:16.495 12:07:15 
ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:25:16.495 12:07:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:25:16.495 12:07:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:25:16.495 12:07:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=26476544 00:25:16.495 12:07:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:25:16.495 12:07:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 103424 00:25:16.495 12:07:15 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:25:16.495 12:07:15 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:25:16.495 12:07:15 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:16.755 12:07:15 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:16.755 12:07:15 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:16.755 12:07:15 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size a36996be-784e-4010-b0d7-6643fddefa5c 00:25:16.755 12:07:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=a36996be-784e-4010-b0d7-6643fddefa5c 00:25:16.755 12:07:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:25:16.755 12:07:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:25:16.755 12:07:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 00:25:16.755 12:07:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a36996be-784e-4010-b0d7-6643fddefa5c 00:25:17.015 12:07:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:25:17.015 { 00:25:17.015 "name": "a36996be-784e-4010-b0d7-6643fddefa5c", 00:25:17.015 "aliases": [ 00:25:17.015 "lvs/nvme0n1p0" 00:25:17.015 ], 00:25:17.015 "product_name": "Logical Volume", 00:25:17.015 "block_size": 4096, 00:25:17.015 "num_blocks": 26476544, 00:25:17.015 "uuid": "a36996be-784e-4010-b0d7-6643fddefa5c", 00:25:17.015 "assigned_rate_limits": { 00:25:17.015 "rw_ios_per_sec": 0, 00:25:17.015 "rw_mbytes_per_sec": 0, 00:25:17.015 "r_mbytes_per_sec": 0, 00:25:17.015 "w_mbytes_per_sec": 0 00:25:17.015 }, 00:25:17.015 "claimed": false, 00:25:17.015 "zoned": false, 00:25:17.015 "supported_io_types": { 00:25:17.015 "read": true, 00:25:17.015 "write": true, 00:25:17.015 "unmap": true, 00:25:17.015 "write_zeroes": true, 00:25:17.015 "flush": false, 00:25:17.015 "reset": true, 00:25:17.015 "compare": false, 00:25:17.015 "compare_and_write": false, 00:25:17.015 "abort": false, 00:25:17.015 "nvme_admin": false, 00:25:17.015 "nvme_io": false 00:25:17.015 }, 00:25:17.015 "driver_specific": { 00:25:17.015 "lvol": { 00:25:17.015 "lvol_store_uuid": "42e53dec-25b3-4819-b71c-91640438d734", 00:25:17.015 "base_bdev": "nvme0n1", 00:25:17.015 "thin_provision": true, 00:25:17.015 "num_allocated_clusters": 0, 00:25:17.015 "snapshot": false, 00:25:17.015 "clone": false, 00:25:17.015 "esnap_clone": false 00:25:17.015 } 00:25:17.015 } 00:25:17.015 } 00:25:17.015 ]' 00:25:17.015 12:07:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:25:17.015 12:07:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:25:17.015 12:07:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 
00:25:17.015 12:07:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=26476544 00:25:17.015 12:07:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:25:17.015 12:07:15 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 103424 00:25:17.015 12:07:15 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:25:17.015 12:07:15 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:17.274 12:07:16 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:25:17.274 12:07:16 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size a36996be-784e-4010-b0d7-6643fddefa5c 00:25:17.274 12:07:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1374 -- # local bdev_name=a36996be-784e-4010-b0d7-6643fddefa5c 00:25:17.274 12:07:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1375 -- # local bdev_info 00:25:17.274 12:07:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1376 -- # local bs 00:25:17.274 12:07:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1377 -- # local nb 00:25:17.274 12:07:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a36996be-784e-4010-b0d7-6643fddefa5c 00:25:17.533 12:07:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:25:17.533 { 00:25:17.533 "name": "a36996be-784e-4010-b0d7-6643fddefa5c", 00:25:17.533 "aliases": [ 00:25:17.533 "lvs/nvme0n1p0" 00:25:17.533 ], 00:25:17.533 "product_name": "Logical Volume", 00:25:17.533 "block_size": 4096, 00:25:17.533 "num_blocks": 26476544, 00:25:17.533 "uuid": "a36996be-784e-4010-b0d7-6643fddefa5c", 00:25:17.533 "assigned_rate_limits": { 00:25:17.533 "rw_ios_per_sec": 0, 00:25:17.533 "rw_mbytes_per_sec": 0, 00:25:17.533 "r_mbytes_per_sec": 0, 00:25:17.533 "w_mbytes_per_sec": 0 00:25:17.533 }, 00:25:17.533 "claimed": false, 00:25:17.533 "zoned": false, 00:25:17.533 "supported_io_types": { 00:25:17.533 "read": true, 00:25:17.533 "write": true, 00:25:17.533 "unmap": true, 00:25:17.533 "write_zeroes": true, 00:25:17.533 "flush": false, 00:25:17.533 "reset": true, 00:25:17.533 "compare": false, 00:25:17.533 "compare_and_write": false, 00:25:17.533 "abort": false, 00:25:17.533 "nvme_admin": false, 00:25:17.533 "nvme_io": false 00:25:17.533 }, 00:25:17.533 "driver_specific": { 00:25:17.533 "lvol": { 00:25:17.533 "lvol_store_uuid": "42e53dec-25b3-4819-b71c-91640438d734", 00:25:17.533 "base_bdev": "nvme0n1", 00:25:17.533 "thin_provision": true, 00:25:17.533 "num_allocated_clusters": 0, 00:25:17.533 "snapshot": false, 00:25:17.533 "clone": false, 00:25:17.533 "esnap_clone": false 00:25:17.533 } 00:25:17.533 } 00:25:17.533 } 00:25:17.533 ]' 00:25:17.533 12:07:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:25:17.533 12:07:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # bs=4096 00:25:17.533 12:07:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:25:17.533 12:07:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # nb=26476544 00:25:17.533 12:07:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bdev_size=103424 00:25:17.533 12:07:16 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # echo 103424 00:25:17.533 12:07:16 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:25:17.533 12:07:16 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # 
ftl_construct_args='bdev_ftl_create -b ftl0 -d a36996be-784e-4010-b0d7-6643fddefa5c --l2p_dram_limit 10' 00:25:17.533 12:07:16 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:25:17.533 12:07:16 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:25:17.533 12:07:16 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:17.533 12:07:16 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:25:17.533 12:07:16 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:25:17.533 12:07:16 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a36996be-784e-4010-b0d7-6643fddefa5c --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:25:17.794 [2024-07-21 12:07:16.459536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.794 [2024-07-21 12:07:16.459593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:17.794 [2024-07-21 12:07:16.459609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:17.794 [2024-07-21 12:07:16.459617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.794 [2024-07-21 12:07:16.459680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.794 [2024-07-21 12:07:16.459689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:17.794 [2024-07-21 12:07:16.459700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:25:17.794 [2024-07-21 12:07:16.459709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.794 [2024-07-21 12:07:16.459732] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:17.794 [2024-07-21 12:07:16.460061] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:17.794 [2024-07-21 12:07:16.460088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.794 [2024-07-21 12:07:16.460096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:17.794 [2024-07-21 12:07:16.460107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:25:17.794 [2024-07-21 12:07:16.460114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.794 [2024-07-21 12:07:16.460145] mngt/ftl_mngt_md.c: 568:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c09ef44a-1311-49f8-9084-4c9a34d361fe 00:25:17.794 [2024-07-21 12:07:16.461536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.794 [2024-07-21 12:07:16.461571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:17.794 [2024-07-21 12:07:16.461587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:25:17.794 [2024-07-21 12:07:16.461595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.794 [2024-07-21 12:07:16.469020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.794 [2024-07-21 12:07:16.469057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:17.794 [2024-07-21 12:07:16.469067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.369 ms 00:25:17.794 [2024-07-21 12:07:16.469076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.794 
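Spelled out, the construct args assembled above reduce to a single RPC against the freshly started target; the data-bdev UUID is specific to this run (a fresh run creates a new lvol), and the trace entries surrounding this point are the resulting FTL startup:

    # -b ftl0: name of the new FTL bdev
    # -d: base data bdev (the thin lvol carved out of nvme0n1 above)
    # -c: NV-cache bdev (nvc0n1p0, split from the 0000:00:10.0 controller)
    # --l2p_dram_limit 10: cap the resident L2P at 10 MiB of DRAM
    # --fast-shutdown: the shutdown path this test exercises
    scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d a36996be-784e-4010-b0d7-6643fddefa5c \
        -c nvc0n1p0 --l2p_dram_limit 10 --fast-shutdown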
[2024-07-21 12:07:16.469154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.794 [2024-07-21 12:07:16.469173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:17.794 [2024-07-21 12:07:16.469181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:25:17.794 [2024-07-21 12:07:16.469190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.794 [2024-07-21 12:07:16.469255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.794 [2024-07-21 12:07:16.469266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:17.794 [2024-07-21 12:07:16.469275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:17.794 [2024-07-21 12:07:16.469284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.794 [2024-07-21 12:07:16.469322] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:17.794 [2024-07-21 12:07:16.471029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.794 [2024-07-21 12:07:16.471073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:17.794 [2024-07-21 12:07:16.471084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.722 ms 00:25:17.794 [2024-07-21 12:07:16.471091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.794 [2024-07-21 12:07:16.471127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.794 [2024-07-21 12:07:16.471141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:17.794 [2024-07-21 12:07:16.471151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:17.794 [2024-07-21 12:07:16.471158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.794 [2024-07-21 12:07:16.471186] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:17.794 [2024-07-21 12:07:16.471330] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:17.794 [2024-07-21 12:07:16.471343] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:17.794 [2024-07-21 12:07:16.471353] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:17.794 [2024-07-21 12:07:16.471364] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:17.794 [2024-07-21 12:07:16.471372] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:17.794 [2024-07-21 12:07:16.471381] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:17.794 [2024-07-21 12:07:16.471388] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:17.794 [2024-07-21 12:07:16.471398] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:17.794 [2024-07-21 12:07:16.471405] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:17.794 [2024-07-21 12:07:16.471414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.794 [2024-07-21 12:07:16.471429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:17.794 [2024-07-21 
12:07:16.471448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:25:17.794 [2024-07-21 12:07:16.471455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.794 [2024-07-21 12:07:16.471523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.794 [2024-07-21 12:07:16.471531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:17.794 [2024-07-21 12:07:16.471542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:25:17.794 [2024-07-21 12:07:16.471556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.794 [2024-07-21 12:07:16.471648] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:17.794 [2024-07-21 12:07:16.471659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:17.794 [2024-07-21 12:07:16.471668] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:17.794 [2024-07-21 12:07:16.471676] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.794 [2024-07-21 12:07:16.471685] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:17.794 [2024-07-21 12:07:16.471691] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:17.794 [2024-07-21 12:07:16.471699] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:17.794 [2024-07-21 12:07:16.471706] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:17.794 [2024-07-21 12:07:16.471714] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:17.794 [2024-07-21 12:07:16.471720] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:17.794 [2024-07-21 12:07:16.471728] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:17.794 [2024-07-21 12:07:16.471735] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:17.794 [2024-07-21 12:07:16.471743] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:17.794 [2024-07-21 12:07:16.471749] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:17.794 [2024-07-21 12:07:16.471759] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:17.794 [2024-07-21 12:07:16.471765] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.794 [2024-07-21 12:07:16.471773] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:17.794 [2024-07-21 12:07:16.471779] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:17.794 [2024-07-21 12:07:16.471787] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.794 [2024-07-21 12:07:16.471793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:17.794 [2024-07-21 12:07:16.471801] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:17.794 [2024-07-21 12:07:16.471806] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:17.794 [2024-07-21 12:07:16.471814] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:17.794 [2024-07-21 12:07:16.471837] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:17.794 [2024-07-21 12:07:16.471845] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:17.794 [2024-07-21 12:07:16.471852] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 
00:25:17.794 [2024-07-21 12:07:16.471877] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:17.794 [2024-07-21 12:07:16.471884] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:17.794 [2024-07-21 12:07:16.471892] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:17.794 [2024-07-21 12:07:16.471898] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:17.794 [2024-07-21 12:07:16.471909] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:17.794 [2024-07-21 12:07:16.471916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:17.794 [2024-07-21 12:07:16.471924] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:17.794 [2024-07-21 12:07:16.471931] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:17.794 [2024-07-21 12:07:16.471939] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:17.794 [2024-07-21 12:07:16.471946] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:17.794 [2024-07-21 12:07:16.471954] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:17.794 [2024-07-21 12:07:16.471960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:17.794 [2024-07-21 12:07:16.471969] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:17.794 [2024-07-21 12:07:16.471976] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.794 [2024-07-21 12:07:16.471983] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:17.794 [2024-07-21 12:07:16.471991] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:17.794 [2024-07-21 12:07:16.471999] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.794 [2024-07-21 12:07:16.472005] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:17.794 [2024-07-21 12:07:16.472014] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:17.794 [2024-07-21 12:07:16.472021] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:17.794 [2024-07-21 12:07:16.472032] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.794 [2024-07-21 12:07:16.472044] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:17.794 [2024-07-21 12:07:16.472053] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:17.794 [2024-07-21 12:07:16.472060] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:17.794 [2024-07-21 12:07:16.472070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:17.794 [2024-07-21 12:07:16.472087] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:17.794 [2024-07-21 12:07:16.472097] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:17.794 [2024-07-21 12:07:16.472107] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:17.794 [2024-07-21 12:07:16.472119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:17.794 [2024-07-21 12:07:16.472130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:17.794 
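A quick cross-check of the layout dump above: the 80.00 MiB l2p region follows directly from the reported entry count and address size, and only the --l2p_dram_limit 10 portion of it is kept resident in DRAM (see the later "l2p maximum resident size is: 9 (of 10) MiB" line):

  # 20971520 L2P entries x 4-byte addresses = 80 MiB of mapping table
  echo $(( 20971520 * 4 / 1024 / 1024 ))   # -> 80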
[2024-07-21 12:07:16.472140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:17.794 [2024-07-21 12:07:16.472148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:17.794 [2024-07-21 12:07:16.472157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:17.794 [2024-07-21 12:07:16.472164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:17.794 [2024-07-21 12:07:16.472173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:17.795 [2024-07-21 12:07:16.472179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:17.795 [2024-07-21 12:07:16.472190] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:17.795 [2024-07-21 12:07:16.472198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:17.795 [2024-07-21 12:07:16.472209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:17.795 [2024-07-21 12:07:16.472216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:17.795 [2024-07-21 12:07:16.472224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:17.795 [2024-07-21 12:07:16.472231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:17.795 [2024-07-21 12:07:16.472240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:17.795 [2024-07-21 12:07:16.472247] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:17.795 [2024-07-21 12:07:16.472256] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:17.795 [2024-07-21 12:07:16.472274] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:17.795 [2024-07-21 12:07:16.472283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:17.795 [2024-07-21 12:07:16.472306] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:17.795 [2024-07-21 12:07:16.472316] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:17.795 [2024-07-21 12:07:16.472324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.795 [2024-07-21 12:07:16.472333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:17.795 [2024-07-21 
12:07:16.472340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:25:17.795 [2024-07-21 12:07:16.472352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.795 [2024-07-21 12:07:16.472395] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:17.795 [2024-07-21 12:07:16.472408] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:21.089 [2024-07-21 12:07:19.696110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.089 [2024-07-21 12:07:19.696175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:21.089 [2024-07-21 12:07:19.696197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3229.928 ms 00:25:21.089 [2024-07-21 12:07:19.696207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.089 [2024-07-21 12:07:19.706993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.089 [2024-07-21 12:07:19.707042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:21.089 [2024-07-21 12:07:19.707054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.717 ms 00:25:21.089 [2024-07-21 12:07:19.707064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.089 [2024-07-21 12:07:19.707191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.089 [2024-07-21 12:07:19.707209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:21.089 [2024-07-21 12:07:19.707217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:25:21.089 [2024-07-21 12:07:19.707225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.089 [2024-07-21 12:07:19.716509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.089 [2024-07-21 12:07:19.716553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:21.089 [2024-07-21 12:07:19.716564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.270 ms 00:25:21.089 [2024-07-21 12:07:19.716573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.089 [2024-07-21 12:07:19.716618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.089 [2024-07-21 12:07:19.716630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:21.089 [2024-07-21 12:07:19.716638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:21.089 [2024-07-21 12:07:19.716646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.089 [2024-07-21 12:07:19.717107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.089 [2024-07-21 12:07:19.717121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:21.089 [2024-07-21 12:07:19.717129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:25:21.089 [2024-07-21 12:07:19.717138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.089 [2024-07-21 12:07:19.717220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.089 [2024-07-21 12:07:19.717234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:21.089 [2024-07-21 12:07:19.717242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.065 ms 00:25:21.089 [2024-07-21 12:07:19.717253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.089 [2024-07-21 12:07:19.724080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.089 [2024-07-21 12:07:19.724121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:21.089 [2024-07-21 12:07:19.724131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.810 ms 00:25:21.089 [2024-07-21 12:07:19.724157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.089 [2024-07-21 12:07:19.731141] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:21.089 [2024-07-21 12:07:19.734231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.089 [2024-07-21 12:07:19.734255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:21.089 [2024-07-21 12:07:19.734267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.026 ms 00:25:21.089 [2024-07-21 12:07:19.734274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.089 [2024-07-21 12:07:19.823395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.089 [2024-07-21 12:07:19.823470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:21.089 [2024-07-21 12:07:19.823498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.253 ms 00:25:21.089 [2024-07-21 12:07:19.823507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.089 [2024-07-21 12:07:19.823677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.089 [2024-07-21 12:07:19.823686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:21.089 [2024-07-21 12:07:19.823697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:25:21.089 [2024-07-21 12:07:19.823705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.089 [2024-07-21 12:07:19.827073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.089 [2024-07-21 12:07:19.827106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:21.089 [2024-07-21 12:07:19.827118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.341 ms 00:25:21.089 [2024-07-21 12:07:19.827143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.089 [2024-07-21 12:07:19.829749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.089 [2024-07-21 12:07:19.829779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:21.089 [2024-07-21 12:07:19.829792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.584 ms 00:25:21.089 [2024-07-21 12:07:19.829799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.089 [2024-07-21 12:07:19.830081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.089 [2024-07-21 12:07:19.830103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:21.089 [2024-07-21 12:07:19.830115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:25:21.089 [2024-07-21 12:07:19.830123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.089 [2024-07-21 12:07:19.870319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.089 
[2024-07-21 12:07:19.870372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:21.089 [2024-07-21 12:07:19.870388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.244 ms 00:25:21.089 [2024-07-21 12:07:19.870399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.089 [2024-07-21 12:07:19.874912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.089 [2024-07-21 12:07:19.874948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:21.089 [2024-07-21 12:07:19.874961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.482 ms 00:25:21.089 [2024-07-21 12:07:19.874985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.089 [2024-07-21 12:07:19.878359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.089 [2024-07-21 12:07:19.878388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:21.089 [2024-07-21 12:07:19.878400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.333 ms 00:25:21.089 [2024-07-21 12:07:19.878407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.089 [2024-07-21 12:07:19.881882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.089 [2024-07-21 12:07:19.881914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:21.089 [2024-07-21 12:07:19.881926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.447 ms 00:25:21.089 [2024-07-21 12:07:19.881932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.089 [2024-07-21 12:07:19.881972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.089 [2024-07-21 12:07:19.881981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:21.089 [2024-07-21 12:07:19.881990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:21.089 [2024-07-21 12:07:19.881997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.089 [2024-07-21 12:07:19.882062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.089 [2024-07-21 12:07:19.882070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:21.089 [2024-07-21 12:07:19.882089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:25:21.089 [2024-07-21 12:07:19.882096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.089 [2024-07-21 12:07:19.883164] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3429.749 ms, result 0 00:25:21.089 { 00:25:21.089 "name": "ftl0", 00:25:21.089 "uuid": "c09ef44a-1311-49f8-9084-4c9a34d361fe" 00:25:21.089 } 00:25:21.089 12:07:19 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:25:21.089 12:07:19 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:21.349 12:07:20 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:25:21.349 12:07:20 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:25:21.611 [2024-07-21 12:07:20.264882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.611 [2024-07-21 12:07:20.265042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 
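restore.sh@61-63 above wrap the save_subsystem_config output in a {"subsystems": [...]} envelope, which is what the spdk_dd run further down consumes via --json. A sketch of how that assembly plausibly looks (the redirect target is an assumption; the trace shows only the echo/rpc.py calls here and the ftl.json path used later):

  # assumed redirection: the log shows these three commands and, later,
  # spdk_dd reading .../test/ftl/config/ftl.json
  {
    echo '{"subsystems": ['
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
    echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json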
00:25:21.611 [2024-07-21 12:07:20.265078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:21.611 [2024-07-21 12:07:20.265108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.611 [2024-07-21 12:07:20.265149] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:21.611 [2024-07-21 12:07:20.265862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.611 [2024-07-21 12:07:20.265907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:21.611 [2024-07-21 12:07:20.265943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.675 ms 00:25:21.611 [2024-07-21 12:07:20.265971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.611 [2024-07-21 12:07:20.266205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.611 [2024-07-21 12:07:20.266241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:21.611 [2024-07-21 12:07:20.266254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:25:21.611 [2024-07-21 12:07:20.266261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.611 [2024-07-21 12:07:20.268720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.611 [2024-07-21 12:07:20.268745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:21.611 [2024-07-21 12:07:20.268755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.424 ms 00:25:21.611 [2024-07-21 12:07:20.268762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.611 [2024-07-21 12:07:20.273602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.611 [2024-07-21 12:07:20.273631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:21.611 [2024-07-21 12:07:20.273642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.814 ms 00:25:21.611 [2024-07-21 12:07:20.273648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.611 [2024-07-21 12:07:20.275491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.611 [2024-07-21 12:07:20.275528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:21.611 [2024-07-21 12:07:20.275544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.755 ms 00:25:21.611 [2024-07-21 12:07:20.275550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.611 [2024-07-21 12:07:20.280014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.611 [2024-07-21 12:07:20.280049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:21.611 [2024-07-21 12:07:20.280061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.437 ms 00:25:21.611 [2024-07-21 12:07:20.280069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.611 [2024-07-21 12:07:20.280177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.611 [2024-07-21 12:07:20.280195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:21.611 [2024-07-21 12:07:20.280206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:25:21.611 [2024-07-21 12:07:20.280215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.611 [2024-07-21 
12:07:20.282170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.611 [2024-07-21 12:07:20.282202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:21.611 [2024-07-21 12:07:20.282212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.937 ms 00:25:21.611 [2024-07-21 12:07:20.282219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.611 [2024-07-21 12:07:20.283729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.611 [2024-07-21 12:07:20.283761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:21.611 [2024-07-21 12:07:20.283775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.480 ms 00:25:21.611 [2024-07-21 12:07:20.283782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.612 [2024-07-21 12:07:20.285011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.612 [2024-07-21 12:07:20.285041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:21.612 [2024-07-21 12:07:20.285051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.197 ms 00:25:21.612 [2024-07-21 12:07:20.285058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.612 [2024-07-21 12:07:20.286157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.612 [2024-07-21 12:07:20.286185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:21.612 [2024-07-21 12:07:20.286196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.046 ms 00:25:21.612 [2024-07-21 12:07:20.286203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.612 [2024-07-21 12:07:20.286231] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:21.612 [2024-07-21 12:07:20.286245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286547] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 
12:07:20.286755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.286998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:21.612 [2024-07-21 12:07:20.287008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:25:21.612 [2024-07-21 12:07:20.287015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:21.613 [2024-07-21 12:07:20.287035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:21.613 [2024-07-21 12:07:20.287042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:21.613 [2024-07-21 12:07:20.287051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:21.613 [2024-07-21 12:07:20.287058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:21.613 [2024-07-21 12:07:20.287067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:21.613 [2024-07-21 12:07:20.287074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:21.613 [2024-07-21 12:07:20.287082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:21.613 [2024-07-21 12:07:20.287089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:21.613 [2024-07-21 12:07:20.287099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:21.613 [2024-07-21 12:07:20.287106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:21.613 [2024-07-21 12:07:20.287114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:21.613 [2024-07-21 12:07:20.287121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:21.613 [2024-07-21 12:07:20.287130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:21.613 [2024-07-21 12:07:20.287145] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:21.613 [2024-07-21 12:07:20.287156] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c09ef44a-1311-49f8-9084-4c9a34d361fe 00:25:21.613 [2024-07-21 12:07:20.287164] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:21.613 [2024-07-21 12:07:20.287173] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:25:21.613 [2024-07-21 12:07:20.287180] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:21.613 [2024-07-21 12:07:20.287189] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:21.613 [2024-07-21 12:07:20.287196] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:21.613 [2024-07-21 12:07:20.287205] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:21.613 [2024-07-21 12:07:20.287214] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:21.613 [2024-07-21 12:07:20.287221] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:21.613 [2024-07-21 12:07:20.287227] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:21.613 [2024-07-21 12:07:20.287236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.613 [2024-07-21 12:07:20.287243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:21.613 [2024-07-21 12:07:20.287254] mngt/ftl_mngt.c: 
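The statistics dump above reports "WAF: inf"; that is consistent with the usual write-amplification definition, since at this point only startup metadata has been written and no user I/O has landed:

  WAF = total writes / user writes = 960 / 0 -> inf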
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.008 ms 00:25:21.613 [2024-07-21 12:07:20.287260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.613 [2024-07-21 12:07:20.288986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.613 [2024-07-21 12:07:20.289004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:21.613 [2024-07-21 12:07:20.289016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.692 ms 00:25:21.613 [2024-07-21 12:07:20.289023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.613 [2024-07-21 12:07:20.289130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:21.613 [2024-07-21 12:07:20.289138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:21.613 [2024-07-21 12:07:20.289147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:25:21.613 [2024-07-21 12:07:20.289154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.613 [2024-07-21 12:07:20.295391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.613 [2024-07-21 12:07:20.295416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:21.613 [2024-07-21 12:07:20.295427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.613 [2024-07-21 12:07:20.295437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.613 [2024-07-21 12:07:20.295483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.613 [2024-07-21 12:07:20.295490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:21.613 [2024-07-21 12:07:20.295499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.613 [2024-07-21 12:07:20.295506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.613 [2024-07-21 12:07:20.295581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.613 [2024-07-21 12:07:20.295591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:21.613 [2024-07-21 12:07:20.295603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.613 [2024-07-21 12:07:20.295610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.613 [2024-07-21 12:07:20.295631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.613 [2024-07-21 12:07:20.295638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:21.613 [2024-07-21 12:07:20.295648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.613 [2024-07-21 12:07:20.295655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.613 [2024-07-21 12:07:20.309148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.613 [2024-07-21 12:07:20.309196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:21.613 [2024-07-21 12:07:20.309208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.613 [2024-07-21 12:07:20.309223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.613 [2024-07-21 12:07:20.317213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.613 [2024-07-21 12:07:20.317250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize metadata 00:25:21.613 [2024-07-21 12:07:20.317261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.613 [2024-07-21 12:07:20.317285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.613 [2024-07-21 12:07:20.317356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.613 [2024-07-21 12:07:20.317366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:21.613 [2024-07-21 12:07:20.317377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.613 [2024-07-21 12:07:20.317385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.613 [2024-07-21 12:07:20.317418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.613 [2024-07-21 12:07:20.317429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:21.613 [2024-07-21 12:07:20.317439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.613 [2024-07-21 12:07:20.317446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.613 [2024-07-21 12:07:20.317522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.613 [2024-07-21 12:07:20.317533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:21.613 [2024-07-21 12:07:20.317542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.613 [2024-07-21 12:07:20.317549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.613 [2024-07-21 12:07:20.317585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.613 [2024-07-21 12:07:20.317595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:21.613 [2024-07-21 12:07:20.317613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.613 [2024-07-21 12:07:20.317621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.613 [2024-07-21 12:07:20.317661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.613 [2024-07-21 12:07:20.317676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:21.613 [2024-07-21 12:07:20.317687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.613 [2024-07-21 12:07:20.317694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.613 [2024-07-21 12:07:20.317738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:21.613 [2024-07-21 12:07:20.317749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:21.613 [2024-07-21 12:07:20.317758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:21.613 [2024-07-21 12:07:20.317764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:21.613 [2024-07-21 12:07:20.317947] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 53.076 ms, result 0 00:25:21.613 true 00:25:21.613 12:07:20 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 95208 00:25:21.613 12:07:20 ftl.ftl_restore_fast -- common/autotest_common.sh@946 -- # '[' -z 95208 ']' 00:25:21.613 12:07:20 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # kill -0 95208 00:25:21.613 12:07:20 ftl.ftl_restore_fast -- common/autotest_common.sh@951 -- # uname 00:25:21.613 12:07:20 
ftl.ftl_restore_fast -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:21.613 12:07:20 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 95208 00:25:21.613 12:07:20 ftl.ftl_restore_fast -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:21.613 12:07:20 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:21.613 12:07:20 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # echo 'killing process with pid 95208' 00:25:21.613 killing process with pid 95208 00:25:21.613 12:07:20 ftl.ftl_restore_fast -- common/autotest_common.sh@965 -- # kill 95208 00:25:21.613 12:07:20 ftl.ftl_restore_fast -- common/autotest_common.sh@970 -- # wait 95208 00:25:31.589 12:07:29 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:25:34.121 262144+0 records in 00:25:34.121 262144+0 records out 00:25:34.121 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.39992 s, 316 MB/s 00:25:34.121 12:07:32 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:36.030 12:07:34 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:36.030 [2024-07-21 12:07:34.545723] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 00:25:36.030 [2024-07-21 12:07:34.546531] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95483 ] 00:25:36.030 [2024-07-21 12:07:34.736297] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:36.030 [2024-07-21 12:07:34.789625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.289 [2024-07-21 12:07:34.899971] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:36.289 [2024-07-21 12:07:34.900053] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:36.289 [2024-07-21 12:07:35.050031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.289 [2024-07-21 12:07:35.050089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:36.289 [2024-07-21 12:07:35.050102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:36.289 [2024-07-21 12:07:35.050110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.289 [2024-07-21 12:07:35.050177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.289 [2024-07-21 12:07:35.050190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:36.289 [2024-07-21 12:07:35.050198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:25:36.289 [2024-07-21 12:07:35.050208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.289 [2024-07-21 12:07:35.050240] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:36.289 [2024-07-21 12:07:35.050473] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:36.289 [2024-07-21 12:07:35.050499] mngt/ftl_mngt.c: 427:trace_step: 
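The dd pre-fill above reports 316 MB/s; that figure checks out against the byte count and elapsed time dd itself printed (dd rounds in decimal megabytes):

  echo $(( 262144 * 4096 ))   # 256K records x 4 KiB = 1073741824 bytes (1.0 GiB)
  awk 'BEGIN { printf "%.0f MB/s\n", 1073741824 / 3.39992 / 1e6 }'   # -> 316 MB/s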
*NOTICE*: [FTL][ftl0] Action 00:25:36.289 [2024-07-21 12:07:35.050511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:36.289 [2024-07-21 12:07:35.050527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:25:36.289 [2024-07-21 12:07:35.050536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.289 [2024-07-21 12:07:35.051973] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:36.289 [2024-07-21 12:07:35.054245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.289 [2024-07-21 12:07:35.054285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:36.289 [2024-07-21 12:07:35.054302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.278 ms 00:25:36.290 [2024-07-21 12:07:35.054310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.290 [2024-07-21 12:07:35.054367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.290 [2024-07-21 12:07:35.054378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:36.290 [2024-07-21 12:07:35.054396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:25:36.290 [2024-07-21 12:07:35.054413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.290 [2024-07-21 12:07:35.061151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.290 [2024-07-21 12:07:35.061193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:36.290 [2024-07-21 12:07:35.061213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.681 ms 00:25:36.290 [2024-07-21 12:07:35.061221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.290 [2024-07-21 12:07:35.061328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.290 [2024-07-21 12:07:35.061340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:36.290 [2024-07-21 12:07:35.061349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:36.290 [2024-07-21 12:07:35.061357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.290 [2024-07-21 12:07:35.061419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.290 [2024-07-21 12:07:35.061434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:36.290 [2024-07-21 12:07:35.061445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:36.290 [2024-07-21 12:07:35.061453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.290 [2024-07-21 12:07:35.061490] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:36.290 [2024-07-21 12:07:35.063182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.290 [2024-07-21 12:07:35.063241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:36.290 [2024-07-21 12:07:35.063270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.715 ms 00:25:36.290 [2024-07-21 12:07:35.063329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.290 [2024-07-21 12:07:35.063537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.290 [2024-07-21 12:07:35.063584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Decorate bands 00:25:36.290 [2024-07-21 12:07:35.063634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:36.290 [2024-07-21 12:07:35.063657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.290 [2024-07-21 12:07:35.063707] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:36.290 [2024-07-21 12:07:35.063753] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:36.290 [2024-07-21 12:07:35.063837] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:36.290 [2024-07-21 12:07:35.063892] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:25:36.290 [2024-07-21 12:07:35.064038] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:36.290 [2024-07-21 12:07:35.064084] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:36.290 [2024-07-21 12:07:35.064125] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:25:36.290 [2024-07-21 12:07:35.064210] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:36.290 [2024-07-21 12:07:35.064249] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:36.290 [2024-07-21 12:07:35.064294] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:36.290 [2024-07-21 12:07:35.064341] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:36.290 [2024-07-21 12:07:35.064350] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:36.290 [2024-07-21 12:07:35.064368] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:36.290 [2024-07-21 12:07:35.064376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.290 [2024-07-21 12:07:35.064384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:36.290 [2024-07-21 12:07:35.064392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:25:36.290 [2024-07-21 12:07:35.064402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.290 [2024-07-21 12:07:35.064477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.290 [2024-07-21 12:07:35.064485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:36.290 [2024-07-21 12:07:35.064493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:36.290 [2024-07-21 12:07:35.064499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.290 [2024-07-21 12:07:35.064592] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:36.290 [2024-07-21 12:07:35.064602] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:36.290 [2024-07-21 12:07:35.064610] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:36.290 [2024-07-21 12:07:35.064616] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.290 [2024-07-21 12:07:35.064626] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:36.290 [2024-07-21 
12:07:35.064633] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:36.290 [2024-07-21 12:07:35.064640] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:36.290 [2024-07-21 12:07:35.064647] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:36.290 [2024-07-21 12:07:35.064653] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:36.290 [2024-07-21 12:07:35.064659] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:36.290 [2024-07-21 12:07:35.064666] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:36.290 [2024-07-21 12:07:35.064672] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:36.290 [2024-07-21 12:07:35.064688] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:36.290 [2024-07-21 12:07:35.064695] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:36.290 [2024-07-21 12:07:35.064702] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:36.290 [2024-07-21 12:07:35.064708] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.290 [2024-07-21 12:07:35.064719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:36.290 [2024-07-21 12:07:35.064728] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:36.290 [2024-07-21 12:07:35.064735] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.290 [2024-07-21 12:07:35.064742] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:36.290 [2024-07-21 12:07:35.064749] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:36.290 [2024-07-21 12:07:35.064756] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.290 [2024-07-21 12:07:35.064763] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:36.290 [2024-07-21 12:07:35.064769] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:36.290 [2024-07-21 12:07:35.064776] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.290 [2024-07-21 12:07:35.064782] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:36.290 [2024-07-21 12:07:35.064788] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:36.290 [2024-07-21 12:07:35.064795] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.290 [2024-07-21 12:07:35.064802] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:36.290 [2024-07-21 12:07:35.064808] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:36.290 [2024-07-21 12:07:35.064814] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:36.290 [2024-07-21 12:07:35.064838] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:36.290 [2024-07-21 12:07:35.064849] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:36.290 [2024-07-21 12:07:35.064855] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:36.290 [2024-07-21 12:07:35.064862] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:36.290 [2024-07-21 12:07:35.064869] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:36.290 [2024-07-21 12:07:35.064891] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:25:36.290 [2024-07-21 12:07:35.064898] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:36.290 [2024-07-21 12:07:35.064906] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:36.290 [2024-07-21 12:07:35.064913] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.290 [2024-07-21 12:07:35.064920] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:36.290 [2024-07-21 12:07:35.064928] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:36.290 [2024-07-21 12:07:35.064935] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.290 [2024-07-21 12:07:35.064954] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:36.290 [2024-07-21 12:07:35.064970] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:36.290 [2024-07-21 12:07:35.064978] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:36.290 [2024-07-21 12:07:35.064987] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:36.290 [2024-07-21 12:07:35.064995] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:36.290 [2024-07-21 12:07:35.065005] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:36.290 [2024-07-21 12:07:35.065013] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:36.290 [2024-07-21 12:07:35.065021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:36.290 [2024-07-21 12:07:35.065028] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:36.290 [2024-07-21 12:07:35.065035] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:36.290 [2024-07-21 12:07:35.065044] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:36.290 [2024-07-21 12:07:35.065055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:36.290 [2024-07-21 12:07:35.065064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:36.290 [2024-07-21 12:07:35.065072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:36.291 [2024-07-21 12:07:35.065080] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:36.291 [2024-07-21 12:07:35.065088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:36.291 [2024-07-21 12:07:35.065096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:36.291 [2024-07-21 12:07:35.065121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:36.291 [2024-07-21 12:07:35.065128] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:36.291 [2024-07-21 12:07:35.065137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:36.291 [2024-07-21 
12:07:35.065145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:36.291 [2024-07-21 12:07:35.065156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:36.291 [2024-07-21 12:07:35.065163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:36.291 [2024-07-21 12:07:35.065171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:36.291 [2024-07-21 12:07:35.065178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:36.291 [2024-07-21 12:07:35.065186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:36.291 [2024-07-21 12:07:35.065193] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:36.291 [2024-07-21 12:07:35.065203] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:36.291 [2024-07-21 12:07:35.065211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:36.291 [2024-07-21 12:07:35.065219] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:36.291 [2024-07-21 12:07:35.065226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:36.291 [2024-07-21 12:07:35.065234] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:36.291 [2024-07-21 12:07:35.065244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.291 [2024-07-21 12:07:35.065253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:36.291 [2024-07-21 12:07:35.065264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.709 ms 00:25:36.291 [2024-07-21 12:07:35.065279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.291 [2024-07-21 12:07:35.086378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.291 [2024-07-21 12:07:35.086487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:36.291 [2024-07-21 12:07:35.086536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.067 ms 00:25:36.291 [2024-07-21 12:07:35.086568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.291 [2024-07-21 12:07:35.086710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.291 [2024-07-21 12:07:35.086761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:36.291 [2024-07-21 12:07:35.086804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:25:36.291 [2024-07-21 12:07:35.086866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.291 [2024-07-21 12:07:35.097038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.291 [2024-07-21 
12:07:35.097126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:36.291 [2024-07-21 12:07:35.097160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.074 ms 00:25:36.291 [2024-07-21 12:07:35.097185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.291 [2024-07-21 12:07:35.097244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.291 [2024-07-21 12:07:35.097288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:36.291 [2024-07-21 12:07:35.097312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:36.291 [2024-07-21 12:07:35.097341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.291 [2024-07-21 12:07:35.097813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.291 [2024-07-21 12:07:35.097871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:36.291 [2024-07-21 12:07:35.097906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:25:36.291 [2024-07-21 12:07:35.097954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.291 [2024-07-21 12:07:35.098095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.291 [2024-07-21 12:07:35.098139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:36.291 [2024-07-21 12:07:35.098170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:25:36.291 [2024-07-21 12:07:35.098193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.291 [2024-07-21 12:07:35.104083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.291 [2024-07-21 12:07:35.104151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:36.291 [2024-07-21 12:07:35.104182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.839 ms 00:25:36.291 [2024-07-21 12:07:35.104202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.291 [2024-07-21 12:07:35.106740] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:25:36.291 [2024-07-21 12:07:35.106827] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:36.291 [2024-07-21 12:07:35.106879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.291 [2024-07-21 12:07:35.106900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:36.291 [2024-07-21 12:07:35.106920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.582 ms 00:25:36.291 [2024-07-21 12:07:35.106939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.291 [2024-07-21 12:07:35.119634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.291 [2024-07-21 12:07:35.119713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:36.291 [2024-07-21 12:07:35.119765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.667 ms 00:25:36.291 [2024-07-21 12:07:35.119786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.291 [2024-07-21 12:07:35.121602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.291 [2024-07-21 12:07:35.121668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Restore band info metadata 00:25:36.291 [2024-07-21 12:07:35.121700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.730 ms 00:25:36.291 [2024-07-21 12:07:35.121721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.291 [2024-07-21 12:07:35.123207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.291 [2024-07-21 12:07:35.123269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:36.291 [2024-07-21 12:07:35.123304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.426 ms 00:25:36.291 [2024-07-21 12:07:35.123324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.291 [2024-07-21 12:07:35.123657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.291 [2024-07-21 12:07:35.123710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:36.291 [2024-07-21 12:07:35.123743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:25:36.291 [2024-07-21 12:07:35.123766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.291 [2024-07-21 12:07:35.143985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.291 [2024-07-21 12:07:35.144139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:36.291 [2024-07-21 12:07:35.144190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.216 ms 00:25:36.291 [2024-07-21 12:07:35.144213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.291 [2024-07-21 12:07:35.150726] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:36.550 [2024-07-21 12:07:35.153828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.550 [2024-07-21 12:07:35.153925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:36.550 [2024-07-21 12:07:35.153958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.557 ms 00:25:36.550 [2024-07-21 12:07:35.153982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.550 [2024-07-21 12:07:35.154077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.550 [2024-07-21 12:07:35.154114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:36.550 [2024-07-21 12:07:35.154157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:36.550 [2024-07-21 12:07:35.154180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.550 [2024-07-21 12:07:35.154272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.550 [2024-07-21 12:07:35.154284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:36.550 [2024-07-21 12:07:35.154301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:36.550 [2024-07-21 12:07:35.154309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.550 [2024-07-21 12:07:35.154332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.550 [2024-07-21 12:07:35.154340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:36.550 [2024-07-21 12:07:35.154348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:36.550 [2024-07-21 12:07:35.154354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
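The "SB metadata layout" entries above describe each region as blk_offs/blk_sz in hexadecimal FTL blocks, while the "NV cache layout" and "Base device layout" entries report the same regions in MiB. The two views agree if one FTL block is 4 KiB, which can be verified directly from the numbers in the dump. A minimal conversion sketch in Python; the 4096-byte block size is inferred from this cross-check of the log's own figures, not read from any header:

FTL_BLOCK_SIZE = 4096  # bytes; inferred by cross-checking the two dumps above

def blocks_to_mib(nblocks: int) -> float:
    # Convert a block count (or block offset) from the hex dump into MiB.
    return nblocks * FTL_BLOCK_SIZE / (1024 * 1024)

# l2p is region type 0x2: blk_offs 0x20, blk_sz 0x5000
print(blocks_to_mib(0x20))    # 0.125  -> logged as "offset: 0.12 MiB"
print(blocks_to_mib(0x5000))  # 80.0   -> logged as "blocks: 80.00 MiB"
# band_md is region type 0x3: blk_offs 0x5020
print(blocks_to_mib(0x5020))  # 80.125 -> logged as "offset: 80.12 MiB"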
00:25:36.550 [2024-07-21 12:07:35.154411] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:36.550 [2024-07-21 12:07:35.154421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.550 [2024-07-21 12:07:35.154427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:36.550 [2024-07-21 12:07:35.154435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:25:36.550 [2024-07-21 12:07:35.154444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.550 [2024-07-21 12:07:35.158096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.550 [2024-07-21 12:07:35.158129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:36.550 [2024-07-21 12:07:35.158139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.639 ms 00:25:36.550 [2024-07-21 12:07:35.158147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.550 [2024-07-21 12:07:35.158219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:36.550 [2024-07-21 12:07:35.158236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:36.550 [2024-07-21 12:07:35.158244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:36.550 [2024-07-21 12:07:35.158251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:36.550 [2024-07-21 12:07:35.159423] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 109.111 ms, result 0 00:26:12.594  Copying: 29/1024 [MB] (29 MBps) Copying: 58/1024 [MB] (28 MBps) Copying: 87/1024 [MB] (28 MBps) Copying: 115/1024 [MB] (28 MBps) Copying: 143/1024 [MB] (28 MBps) Copying: 172/1024 [MB] (28 MBps) Copying: 201/1024 [MB] (29 MBps) Copying: 230/1024 [MB] (28 MBps) Copying: 258/1024 [MB] (28 MBps) Copying: 287/1024 [MB] (28 MBps) Copying: 316/1024 [MB] (28 MBps) Copying: 345/1024 [MB] (28 MBps) Copying: 373/1024 [MB] (28 MBps) Copying: 402/1024 [MB] (28 MBps) Copying: 434/1024 [MB] (32 MBps) Copying: 463/1024 [MB] (28 MBps) Copying: 491/1024 [MB] (27 MBps) Copying: 518/1024 [MB] (27 MBps) Copying: 547/1024 [MB] (28 MBps) Copying: 576/1024 [MB] (28 MBps) Copying: 605/1024 [MB] (29 MBps) Copying: 634/1024 [MB] (28 MBps) Copying: 662/1024 [MB] (28 MBps) Copying: 691/1024 [MB] (29 MBps) Copying: 720/1024 [MB] (28 MBps) Copying: 749/1024 [MB] (28 MBps) Copying: 777/1024 [MB] (28 MBps) Copying: 805/1024 [MB] (28 MBps) Copying: 833/1024 [MB] (28 MBps) Copying: 859/1024 [MB] (26 MBps) Copying: 885/1024 [MB] (26 MBps) Copying: 912/1024 [MB] (26 MBps) Copying: 938/1024 [MB] (26 MBps) Copying: 964/1024 [MB] (25 MBps) Copying: 991/1024 [MB] (26 MBps) Copying: 1017/1024 [MB] (26 MBps) Copying: 1024/1024 [MB] (average 28 MBps)[2024-07-21 12:08:11.340259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.594 [2024-07-21 12:08:11.340351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:12.594 [2024-07-21 12:08:11.340392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:12.594 [2024-07-21 12:08:11.340421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.594 [2024-07-21 12:08:11.340481] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:12.594 [2024-07-21 12:08:11.341231] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.594 [2024-07-21 12:08:11.341248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:12.594 [2024-07-21 12:08:11.341257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:26:12.594 [2024-07-21 12:08:11.341263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.594 [2024-07-21 12:08:11.342995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.594 [2024-07-21 12:08:11.343032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:12.594 [2024-07-21 12:08:11.343042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.711 ms 00:26:12.594 [2024-07-21 12:08:11.343055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.594 [2024-07-21 12:08:11.343091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.594 [2024-07-21 12:08:11.343099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:26:12.594 [2024-07-21 12:08:11.343107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:12.594 [2024-07-21 12:08:11.343114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.594 [2024-07-21 12:08:11.343162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.594 [2024-07-21 12:08:11.343171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:26:12.594 [2024-07-21 12:08:11.343178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:26:12.594 [2024-07-21 12:08:11.343185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.594 [2024-07-21 12:08:11.343196] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:12.594 [2024-07-21 12:08:11.343210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:12.594 [2024-07-21 12:08:11.343218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:12.594 [2024-07-21 12:08:11.343226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:12.594 [2024-07-21 12:08:11.343233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:12.594 [2024-07-21 12:08:11.343241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:12.594 [2024-07-21 12:08:11.343248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:12.594 [2024-07-21 12:08:11.343256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:12.594 [2024-07-21 12:08:11.343263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:12.594 [2024-07-21 12:08:11.343270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:12.594 [2024-07-21 12:08:11.343278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:12.594 [2024-07-21 12:08:11.343285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:12.594 [2024-07-21 12:08:11.343292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343529] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343733] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.343992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.344023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.344051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.344079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.344107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.344135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.344163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.344191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.344219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.344284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.344336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 
12:08:11.344372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.344419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.344456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.344499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.344550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.344591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.344625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.344671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.344708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.344742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.344786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.344842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.344878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.344917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:12.595 [2024-07-21 12:08:11.344957] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:12.595 [2024-07-21 12:08:11.344984] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c09ef44a-1311-49f8-9084-4c9a34d361fe 00:26:12.595 [2024-07-21 12:08:11.345016] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:12.595 [2024-07-21 12:08:11.345048] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:26:12.595 [2024-07-21 12:08:11.345074] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:12.595 [2024-07-21 12:08:11.345095] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:12.595 [2024-07-21 12:08:11.345102] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:12.595 [2024-07-21 12:08:11.345109] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:12.595 [2024-07-21 12:08:11.345123] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:12.595 [2024-07-21 12:08:11.345129] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:12.595 [2024-07-21 12:08:11.345135] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:12.595 [2024-07-21 12:08:11.345142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.595 [2024-07-21 12:08:11.345148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:12.595 [2024-07-21 12:08:11.345159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 1.949 ms 00:26:12.595 [2024-07-21 12:08:11.345172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.595 [2024-07-21 12:08:11.346798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.595 [2024-07-21 12:08:11.346817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:12.595 [2024-07-21 12:08:11.346848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.612 ms 00:26:12.595 [2024-07-21 12:08:11.346856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.595 [2024-07-21 12:08:11.346963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.595 [2024-07-21 12:08:11.346975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:12.595 [2024-07-21 12:08:11.346983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:26:12.595 [2024-07-21 12:08:11.346990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.595 [2024-07-21 12:08:11.352427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.595 [2024-07-21 12:08:11.352451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:12.595 [2024-07-21 12:08:11.352461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.595 [2024-07-21 12:08:11.352467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.595 [2024-07-21 12:08:11.352530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.595 [2024-07-21 12:08:11.352543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:12.595 [2024-07-21 12:08:11.352549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.595 [2024-07-21 12:08:11.352555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.595 [2024-07-21 12:08:11.352584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.595 [2024-07-21 12:08:11.352594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:12.595 [2024-07-21 12:08:11.352604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.595 [2024-07-21 12:08:11.352610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.595 [2024-07-21 12:08:11.352623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.595 [2024-07-21 12:08:11.352630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:12.595 [2024-07-21 12:08:11.352639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.595 [2024-07-21 12:08:11.352645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.595 [2024-07-21 12:08:11.365819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.595 [2024-07-21 12:08:11.365935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:12.595 [2024-07-21 12:08:11.365979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.595 [2024-07-21 12:08:11.365999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.595 [2024-07-21 12:08:11.373971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.595 [2024-07-21 12:08:11.374094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:12.595 
[2024-07-21 12:08:11.374141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.595 [2024-07-21 12:08:11.374160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.595 [2024-07-21 12:08:11.374225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.595 [2024-07-21 12:08:11.374246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:12.595 [2024-07-21 12:08:11.374265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.595 [2024-07-21 12:08:11.374283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.595 [2024-07-21 12:08:11.374354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.595 [2024-07-21 12:08:11.374374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:12.595 [2024-07-21 12:08:11.374400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.595 [2024-07-21 12:08:11.374470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.595 [2024-07-21 12:08:11.374555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.595 [2024-07-21 12:08:11.374595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:12.595 [2024-07-21 12:08:11.374625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.595 [2024-07-21 12:08:11.374652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.595 [2024-07-21 12:08:11.374700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.595 [2024-07-21 12:08:11.374732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:12.595 [2024-07-21 12:08:11.374760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.595 [2024-07-21 12:08:11.374799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.595 [2024-07-21 12:08:11.374877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.595 [2024-07-21 12:08:11.374910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:12.595 [2024-07-21 12:08:11.374938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.595 [2024-07-21 12:08:11.374965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.595 [2024-07-21 12:08:11.375019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.595 [2024-07-21 12:08:11.375048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:12.595 [2024-07-21 12:08:11.375076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.595 [2024-07-21 12:08:11.375110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.595 [2024-07-21 12:08:11.375265] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 35.016 ms, result 0 00:26:13.973 00:26:13.973 00:26:13.973 12:08:12 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:26:13.973 [2024-07-21 12:08:12.579098] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
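Every management step above is traced as a four-entry group (Action, name, duration, status), and each whole process closes with a "Management process finished ... duration = N ms" summary. A minimal sketch for totalling the time spent per step across a capture like this one; "build.log" is a placeholder path, and the regexes assume the flattened single-line format shown here:

import re
from collections import defaultdict

# Placeholder path; point this at a saved copy of the log.
text = open("build.log", encoding="utf-8", errors="replace").read()

# "name: <step>" is always followed within the same group by
# "duration: <ms> ms". The "Management process finished" summaries
# use "duration =" instead and are deliberately not matched here.
names = re.findall(r"name: (.+?) \d{2}:\d{2}:\d{2}\.\d{3}", text)
durations = [float(d) for d in re.findall(r"duration: ([0-9.]+) ms", text)]

totals = defaultdict(float)
for step, ms in zip(names, durations):
    totals[step] += ms

for step, ms in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{ms:10.3f} ms  {step}")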
00:26:13.973 [2024-07-21 12:08:12.579287] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95864 ] 00:26:13.973 [2024-07-21 12:08:12.740598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.973 [2024-07-21 12:08:12.785014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.234 [2024-07-21 12:08:12.884738] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:14.234 [2024-07-21 12:08:12.884922] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:14.234 [2024-07-21 12:08:13.031360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.234 [2024-07-21 12:08:13.031462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:14.234 [2024-07-21 12:08:13.031507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:14.234 [2024-07-21 12:08:13.031528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.234 [2024-07-21 12:08:13.031604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.234 [2024-07-21 12:08:13.031647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:14.234 [2024-07-21 12:08:13.031667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:14.234 [2024-07-21 12:08:13.031691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.234 [2024-07-21 12:08:13.031761] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:14.234 [2024-07-21 12:08:13.032170] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:14.234 [2024-07-21 12:08:13.032243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.234 [2024-07-21 12:08:13.032278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:14.234 [2024-07-21 12:08:13.032310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.487 ms 00:26:14.234 [2024-07-21 12:08:13.032330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.234 [2024-07-21 12:08:13.032638] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:26:14.234 [2024-07-21 12:08:13.032693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.234 [2024-07-21 12:08:13.032714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:14.234 [2024-07-21 12:08:13.032741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:26:14.234 [2024-07-21 12:08:13.032784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.234 [2024-07-21 12:08:13.032872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.234 [2024-07-21 12:08:13.032916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:14.234 [2024-07-21 12:08:13.032943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:26:14.234 [2024-07-21 12:08:13.032969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.234 [2024-07-21 12:08:13.033202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.234 [2024-07-21 
12:08:13.033239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:14.234 [2024-07-21 12:08:13.033266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:26:14.234 [2024-07-21 12:08:13.033298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.234 [2024-07-21 12:08:13.033382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.234 [2024-07-21 12:08:13.033420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:14.234 [2024-07-21 12:08:13.033447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:14.234 [2024-07-21 12:08:13.033465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.234 [2024-07-21 12:08:13.033514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.234 [2024-07-21 12:08:13.033542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:14.234 [2024-07-21 12:08:13.033568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:14.234 [2024-07-21 12:08:13.033600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.234 [2024-07-21 12:08:13.033643] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:14.234 [2024-07-21 12:08:13.035344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.234 [2024-07-21 12:08:13.035398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:14.234 [2024-07-21 12:08:13.035429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.708 ms 00:26:14.234 [2024-07-21 12:08:13.035448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.234 [2024-07-21 12:08:13.035499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.234 [2024-07-21 12:08:13.035530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:14.234 [2024-07-21 12:08:13.035562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:14.234 [2024-07-21 12:08:13.035588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.234 [2024-07-21 12:08:13.035644] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:14.234 [2024-07-21 12:08:13.035687] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:14.234 [2024-07-21 12:08:13.035751] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:14.234 [2024-07-21 12:08:13.035815] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:14.234 [2024-07-21 12:08:13.035937] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:14.234 [2024-07-21 12:08:13.035977] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:14.234 [2024-07-21 12:08:13.036006] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:14.234 [2024-07-21 12:08:13.036019] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:14.234 [2024-07-21 12:08:13.036028] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:14.234 [2024-07-21 12:08:13.036035] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:14.234 [2024-07-21 12:08:13.036042] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:14.234 [2024-07-21 12:08:13.036048] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:14.234 [2024-07-21 12:08:13.036055] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:14.234 [2024-07-21 12:08:13.036064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.234 [2024-07-21 12:08:13.036071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:14.234 [2024-07-21 12:08:13.036077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.421 ms 00:26:14.234 [2024-07-21 12:08:13.036091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.234 [2024-07-21 12:08:13.036162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.234 [2024-07-21 12:08:13.036173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:14.234 [2024-07-21 12:08:13.036180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:26:14.234 [2024-07-21 12:08:13.036186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.234 [2024-07-21 12:08:13.036274] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:14.234 [2024-07-21 12:08:13.036285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:14.234 [2024-07-21 12:08:13.036292] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:14.234 [2024-07-21 12:08:13.036305] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:14.234 [2024-07-21 12:08:13.036313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:14.234 [2024-07-21 12:08:13.036319] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:14.234 [2024-07-21 12:08:13.036325] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:14.234 [2024-07-21 12:08:13.036331] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:14.234 [2024-07-21 12:08:13.036340] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:14.234 [2024-07-21 12:08:13.036346] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:14.234 [2024-07-21 12:08:13.036353] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:14.234 [2024-07-21 12:08:13.036360] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:14.234 [2024-07-21 12:08:13.036365] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:14.234 [2024-07-21 12:08:13.036373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:14.234 [2024-07-21 12:08:13.036387] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:14.234 [2024-07-21 12:08:13.036394] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:14.234 [2024-07-21 12:08:13.036400] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:14.234 [2024-07-21 12:08:13.036407] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:14.234 [2024-07-21 12:08:13.036412] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:26:14.234 [2024-07-21 12:08:13.036419] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:14.234 [2024-07-21 12:08:13.036425] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:14.234 [2024-07-21 12:08:13.036431] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:14.234 [2024-07-21 12:08:13.036438] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:14.234 [2024-07-21 12:08:13.036444] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:14.234 [2024-07-21 12:08:13.036452] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:14.234 [2024-07-21 12:08:13.036459] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:14.234 [2024-07-21 12:08:13.036465] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:14.234 [2024-07-21 12:08:13.036471] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:14.234 [2024-07-21 12:08:13.036477] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:14.235 [2024-07-21 12:08:13.036483] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:14.235 [2024-07-21 12:08:13.036488] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:14.235 [2024-07-21 12:08:13.036494] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:14.235 [2024-07-21 12:08:13.036500] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:14.235 [2024-07-21 12:08:13.036506] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:14.235 [2024-07-21 12:08:13.036512] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:14.235 [2024-07-21 12:08:13.036519] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:14.235 [2024-07-21 12:08:13.036524] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:14.235 [2024-07-21 12:08:13.036530] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:14.235 [2024-07-21 12:08:13.036536] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:14.235 [2024-07-21 12:08:13.036542] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:14.235 [2024-07-21 12:08:13.036552] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:14.235 [2024-07-21 12:08:13.036559] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:14.235 [2024-07-21 12:08:13.036565] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:14.235 [2024-07-21 12:08:13.036571] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:14.235 [2024-07-21 12:08:13.036584] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:14.235 [2024-07-21 12:08:13.036601] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:14.235 [2024-07-21 12:08:13.036607] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:14.235 [2024-07-21 12:08:13.036614] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:14.235 [2024-07-21 12:08:13.036620] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:14.235 [2024-07-21 12:08:13.036626] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:14.235 [2024-07-21 12:08:13.036632] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:14.235 [2024-07-21 12:08:13.036638] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:14.235 [2024-07-21 12:08:13.036645] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:14.235 [2024-07-21 12:08:13.036652] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:14.235 [2024-07-21 12:08:13.036660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:14.235 [2024-07-21 12:08:13.036668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:14.235 [2024-07-21 12:08:13.036677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:14.235 [2024-07-21 12:08:13.036685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:14.235 [2024-07-21 12:08:13.036691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:14.235 [2024-07-21 12:08:13.036698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:14.235 [2024-07-21 12:08:13.036705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:14.235 [2024-07-21 12:08:13.036711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:14.235 [2024-07-21 12:08:13.036718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:14.235 [2024-07-21 12:08:13.036724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:14.235 [2024-07-21 12:08:13.036730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:14.235 [2024-07-21 12:08:13.036737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:14.235 [2024-07-21 12:08:13.036743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:14.235 [2024-07-21 12:08:13.036749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:14.235 [2024-07-21 12:08:13.036756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:14.235 [2024-07-21 12:08:13.036763] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:14.235 [2024-07-21 12:08:13.036770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:14.235 [2024-07-21 12:08:13.036777] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
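An aside on reading the superblock dump above: each "Region type:... ver:... blk_offs:... blk_sz:..." entry describes one metadata region as a (type id, on-disk format version, block offset, block length) tuple, with offsets and lengths counted in FTL blocks rather than bytes; the type 0xfffffffe entries appear to cover the unallocated gaps. A minimal sketch of how such a descriptor table could be rendered in this format follows (the struct and field names are illustrative, not SPDK's actual definitions; the base-device region list resumes right after this aside):

```c
/* Illustrative sketch only: a hypothetical region-descriptor table
 * printed in the "Region type:... ver:... blk_offs:... blk_sz:..."
 * format seen in the superblock dump above. Not SPDK's real structs. */
#include <inttypes.h>
#include <stdio.h>

struct md_region_desc {
    uint32_t type;     /* region type id; 0xfffffffe appears to mark free gaps */
    uint32_t ver;      /* on-disk version of the region format */
    uint64_t blk_offs; /* start offset, in FTL blocks */
    uint64_t blk_sz;   /* length, in FTL blocks */
};

static void dump_md_layout(const struct md_region_desc *regions, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        printf("Region type:0x%" PRIx32 " ver:%" PRIu32
               " blk_offs:0x%" PRIx64 " blk_sz:0x%" PRIx64 "\n",
               regions[i].type, regions[i].ver,
               regions[i].blk_offs, regions[i].blk_sz);
    }
}

int main(void)
{
    /* First two base-dev entries from the dump above. */
    const struct md_region_desc base_dev[] = {
        { 0x1,        5, 0x0,  0x20 },
        { 0xfffffffe, 0, 0x20, 0x20 },
    };
    dump_md_layout(base_dev, 2);
    return 0;
}
```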
00:26:14.235 [2024-07-21 12:08:13.036786] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:14.235 [2024-07-21 12:08:13.036793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:14.235 [2024-07-21 12:08:13.036799] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:14.235 [2024-07-21 12:08:13.036806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-07-21 12:08:13.036831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:14.235 [2024-07-21 12:08:13.036839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:26:14.235 [2024-07-21 12:08:13.036845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-07-21 12:08:13.053735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-07-21 12:08:13.053809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:14.235 [2024-07-21 12:08:13.053854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.874 ms 00:26:14.235 [2024-07-21 12:08:13.053874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-07-21 12:08:13.053962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-07-21 12:08:13.053982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:14.235 [2024-07-21 12:08:13.054003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:14.235 [2024-07-21 12:08:13.054040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-07-21 12:08:13.063774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-07-21 12:08:13.063874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:14.235 [2024-07-21 12:08:13.063909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.674 ms 00:26:14.235 [2024-07-21 12:08:13.063933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-07-21 12:08:13.063983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-07-21 12:08:13.064014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:14.235 [2024-07-21 12:08:13.064038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:14.235 [2024-07-21 12:08:13.064060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-07-21 12:08:13.064170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-07-21 12:08:13.064221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:14.235 [2024-07-21 12:08:13.064253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:26:14.235 [2024-07-21 12:08:13.064285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-07-21 12:08:13.064418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-07-21 12:08:13.064462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:14.235 [2024-07-21 12:08:13.064494] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:26:14.235 [2024-07-21 12:08:13.064543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-07-21 12:08:13.070420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-07-21 12:08:13.070502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:14.235 [2024-07-21 12:08:13.070552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.830 ms 00:26:14.235 [2024-07-21 12:08:13.070572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-07-21 12:08:13.070722] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:14.235 [2024-07-21 12:08:13.070799] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:14.235 [2024-07-21 12:08:13.070885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-07-21 12:08:13.070915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:14.235 [2024-07-21 12:08:13.070944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:26:14.235 [2024-07-21 12:08:13.070983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-07-21 12:08:13.083635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-07-21 12:08:13.083696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:14.235 [2024-07-21 12:08:13.083739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.641 ms 00:26:14.235 [2024-07-21 12:08:13.083759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-07-21 12:08:13.083882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-07-21 12:08:13.083933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:14.235 [2024-07-21 12:08:13.083963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:26:14.235 [2024-07-21 12:08:13.084001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-07-21 12:08:13.084072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-07-21 12:08:13.084107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:14.235 [2024-07-21 12:08:13.084137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:26:14.235 [2024-07-21 12:08:13.084169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-07-21 12:08:13.084438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-07-21 12:08:13.084482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:14.235 [2024-07-21 12:08:13.084513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:26:14.235 [2024-07-21 12:08:13.084556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-07-21 12:08:13.084606] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:26:14.235 [2024-07-21 12:08:13.084651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-07-21 12:08:13.084680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:26:14.235 [2024-07-21 12:08:13.084701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:26:14.235 [2024-07-21 12:08:13.084730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-07-21 12:08:13.092104] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:14.235 [2024-07-21 12:08:13.092278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-07-21 12:08:13.092326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:14.235 [2024-07-21 12:08:13.092356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.499 ms 00:26:14.235 [2024-07-21 12:08:13.092390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-07-21 12:08:13.094395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-07-21 12:08:13.094452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:14.235 [2024-07-21 12:08:13.094492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.971 ms 00:26:14.235 [2024-07-21 12:08:13.094512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-07-21 12:08:13.094613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-07-21 12:08:13.094651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:14.235 [2024-07-21 12:08:13.094678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:26:14.235 [2024-07-21 12:08:13.094710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-07-21 12:08:13.094767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-07-21 12:08:13.094839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:14.235 [2024-07-21 12:08:13.094881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:14.235 [2024-07-21 12:08:13.094908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.235 [2024-07-21 12:08:13.094962] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:14.235 [2024-07-21 12:08:13.094994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.235 [2024-07-21 12:08:13.095021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:14.235 [2024-07-21 12:08:13.095052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:14.235 [2024-07-21 12:08:13.095096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.494 [2024-07-21 12:08:13.099028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.494 [2024-07-21 12:08:13.099100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:14.494 [2024-07-21 12:08:13.099134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.904 ms 00:26:14.494 [2024-07-21 12:08:13.099156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.494 [2024-07-21 12:08:13.099245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:14.494 [2024-07-21 12:08:13.099279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:14.494 [2024-07-21 12:08:13.099315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 
00:26:14.494 [2024-07-21 12:08:13.099336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.494 [2024-07-21 12:08:13.100352] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 68.711 ms, result 0 00:26:49.623  Copying: 1024/1024 [MB] (average 29 MBps)[2024-07-21 12:08:48.426011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.623 [2024-07-21 12:08:48.426083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:49.623 [2024-07-21 12:08:48.426100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:49.623 [2024-07-21 12:08:48.426109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.623 [2024-07-21 12:08:48.426140] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:49.623 [2024-07-21 12:08:48.426965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.623 [2024-07-21 12:08:48.426988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:49.623 [2024-07-21 12:08:48.426998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.812 ms 00:26:49.623 [2024-07-21 12:08:48.427006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.623 [2024-07-21 12:08:48.427274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.623 [2024-07-21 12:08:48.427295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:49.623 [2024-07-21 12:08:48.427315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:26:49.623 [2024-07-21 12:08:48.427324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.623 [2024-07-21 12:08:48.427358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.623 [2024-07-21 12:08:48.427368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:26:49.623 [2024-07-21 12:08:48.427377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:49.623 [2024-07-21 12:08:48.427385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.623 [2024-07-21 12:08:48.427532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:49.623 [2024-07-21 12:08:48.427546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:26:49.623 [2024-07-21 12:08:48.427555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:26:49.623 [2024-07-21 12:08:48.427563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.623 [2024-07-21 12:08:48.427579] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:49.623 [2024-07-21 12:08:48.427592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:49.623 [2024-07-21 12:08:48.427606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:49.623 [2024-07-21 12:08:48.427615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:49.623 [2024-07-21 12:08:48.427624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:49.623 [2024-07-21 12:08:48.427633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:49.623 [2024-07-21 12:08:48.427643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:49.623 [2024-07-21 12:08:48.427652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:49.623 [2024-07-21 12:08:48.427660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:49.623 [2024-07-21 12:08:48.427669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:49.623 [2024-07-21 12:08:48.427677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:49.623 [2024-07-21 12:08:48.427686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:49.623 [2024-07-21 12:08:48.427695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:49.623 [2024-07-21 12:08:48.427704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:49.623 [2024-07-21 12:08:48.427713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:49.623 [2024-07-21 12:08:48.427724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:49.623 [2024-07-21 12:08:48.427733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:49.623 [2024-07-21 12:08:48.427742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:49.623 [2024-07-21 12:08:48.427751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:49.623 [2024-07-21 12:08:48.427760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:49.623 [2024-07-21 12:08:48.427769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:49.623 [2024-07-21 12:08:48.427777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:49.623 [2024-07-21 12:08:48.427786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:49.623 [2024-07-21 12:08:48.427795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:49.623 [2024-07-21 12:08:48.427803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:49.623 [2024-07-21 12:08:48.427813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.427834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.427843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.427852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.427861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.427870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.427878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.427888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.427897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.427905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.427913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.427922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.427930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.427939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.427948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.427956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.427979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.427988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.427997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428131] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 
12:08:48.428350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 
00:26:49.624 [2024-07-21 12:08:48.428564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:49.624 [2024-07-21 12:08:48.428608] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:49.624 [2024-07-21 12:08:48.428616] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c09ef44a-1311-49f8-9084-4c9a34d361fe 00:26:49.624 [2024-07-21 12:08:48.428625] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:49.624 [2024-07-21 12:08:48.428643] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:26:49.624 [2024-07-21 12:08:48.428651] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:49.624 [2024-07-21 12:08:48.428660] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:49.624 [2024-07-21 12:08:48.428667] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:49.625 [2024-07-21 12:08:48.428677] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:49.625 [2024-07-21 12:08:48.428686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:49.625 [2024-07-21 12:08:48.428694] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:49.625 [2024-07-21 12:08:48.428701] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:49.625 [2024-07-21 12:08:48.428710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.625 [2024-07-21 12:08:48.428719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:49.625 [2024-07-21 12:08:48.428737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.135 ms 00:26:49.625 [2024-07-21 12:08:48.428749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.625 [2024-07-21 12:08:48.430806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.625 [2024-07-21 12:08:48.430857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:49.625 [2024-07-21 12:08:48.430869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.042 ms 00:26:49.625 [2024-07-21 12:08:48.430878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.625 [2024-07-21 12:08:48.431004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:49.625 [2024-07-21 12:08:48.431016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:49.625 [2024-07-21 12:08:48.431028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:26:49.625 [2024-07-21 12:08:48.431036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.625 [2024-07-21 12:08:48.437798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.625 [2024-07-21 12:08:48.437840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:49.625 [2024-07-21 12:08:48.437851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.625 
[2024-07-21 12:08:48.437858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.625 [2024-07-21 12:08:48.437918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.625 [2024-07-21 12:08:48.437927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:49.625 [2024-07-21 12:08:48.437938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.625 [2024-07-21 12:08:48.437945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.625 [2024-07-21 12:08:48.438002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.625 [2024-07-21 12:08:48.438013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:49.625 [2024-07-21 12:08:48.438021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.625 [2024-07-21 12:08:48.438027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.625 [2024-07-21 12:08:48.438049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.625 [2024-07-21 12:08:48.438064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:49.625 [2024-07-21 12:08:48.438071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.625 [2024-07-21 12:08:48.438081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.625 [2024-07-21 12:08:48.452849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.625 [2024-07-21 12:08:48.452909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:49.625 [2024-07-21 12:08:48.452920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.625 [2024-07-21 12:08:48.452928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.625 [2024-07-21 12:08:48.462064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.625 [2024-07-21 12:08:48.462114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:49.625 [2024-07-21 12:08:48.462133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.625 [2024-07-21 12:08:48.462140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.625 [2024-07-21 12:08:48.462191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.625 [2024-07-21 12:08:48.462200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:49.625 [2024-07-21 12:08:48.462208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.625 [2024-07-21 12:08:48.462215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.625 [2024-07-21 12:08:48.462237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.625 [2024-07-21 12:08:48.462245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:49.625 [2024-07-21 12:08:48.462254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.625 [2024-07-21 12:08:48.462260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.625 [2024-07-21 12:08:48.462322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.625 [2024-07-21 12:08:48.462332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:49.625 [2024-07-21 12:08:48.462339] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.625 [2024-07-21 12:08:48.462345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.625 [2024-07-21 12:08:48.462369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.625 [2024-07-21 12:08:48.462378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:49.625 [2024-07-21 12:08:48.462385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.625 [2024-07-21 12:08:48.462391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.625 [2024-07-21 12:08:48.462426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.625 [2024-07-21 12:08:48.462435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:49.625 [2024-07-21 12:08:48.462442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.625 [2024-07-21 12:08:48.462448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.625 [2024-07-21 12:08:48.462494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:49.625 [2024-07-21 12:08:48.462517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:49.625 [2024-07-21 12:08:48.462524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:49.625 [2024-07-21 12:08:48.462531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:49.625 [2024-07-21 12:08:48.462640] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 36.679 ms, result 0 00:26:49.885 00:26:49.885 00:26:49.885 12:08:48 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:51.871 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:51.871 12:08:50 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:26:51.871 [2024-07-21 12:08:50.433946] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
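One figure in the statistics dump above deserves a note: "WAF: inf" is the write amplification factor, i.e. total media writes divided by user writes. With 32 total writes and 0 user writes the ratio is undefined, which the log prints as infinity. A minimal sketch of that arithmetic, with illustrative names (the startup log of the next spdk_dd run resumes below):

```c
/* Sketch of the WAF figure from the statistics dump above:
 * write amplification = total media writes / user writes.
 * With total writes = 32 and user writes = 0 the ratio is
 * undefined and is reported as "inf". Names are illustrative,
 * not taken from SPDK. */
#include <math.h>
#include <stdint.h>
#include <stdio.h>

static double ftl_waf(uint64_t total_writes, uint64_t user_writes)
{
    if (user_writes == 0) {
        return INFINITY; /* nothing user-written yet */
    }
    return (double)total_writes / (double)user_writes;
}

int main(void)
{
    printf("WAF: %g\n", ftl_waf(32, 0)); /* prints "WAF: inf" */
    return 0;
}
```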
00:26:51.871 [2024-07-21 12:08:50.434075] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96249 ] 00:26:51.871 [2024-07-21 12:08:50.597395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.871 [2024-07-21 12:08:50.644000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.131 [2024-07-21 12:08:50.744059] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:52.131 [2024-07-21 12:08:50.744135] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:52.131 [2024-07-21 12:08:50.891263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.131 [2024-07-21 12:08:50.891318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:52.131 [2024-07-21 12:08:50.891333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:52.131 [2024-07-21 12:08:50.891356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.131 [2024-07-21 12:08:50.891409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.131 [2024-07-21 12:08:50.891420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:52.131 [2024-07-21 12:08:50.891428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:52.131 [2024-07-21 12:08:50.891439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.131 [2024-07-21 12:08:50.891457] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:52.131 [2024-07-21 12:08:50.891644] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:52.131 [2024-07-21 12:08:50.891672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.132 [2024-07-21 12:08:50.891682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:52.132 [2024-07-21 12:08:50.891690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:26:52.132 [2024-07-21 12:08:50.891704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.132 [2024-07-21 12:08:50.891948] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:26:52.132 [2024-07-21 12:08:50.891968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.132 [2024-07-21 12:08:50.891985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:52.132 [2024-07-21 12:08:50.891993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:26:52.132 [2024-07-21 12:08:50.892003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.132 [2024-07-21 12:08:50.892081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.132 [2024-07-21 12:08:50.892091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:52.132 [2024-07-21 12:08:50.892098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:26:52.132 [2024-07-21 12:08:50.892105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.132 [2024-07-21 12:08:50.892313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.132 [2024-07-21 
12:08:50.892330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:52.132 [2024-07-21 12:08:50.892337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:26:52.132 [2024-07-21 12:08:50.892347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.132 [2024-07-21 12:08:50.892427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.132 [2024-07-21 12:08:50.892443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:52.132 [2024-07-21 12:08:50.892451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:26:52.132 [2024-07-21 12:08:50.892457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.132 [2024-07-21 12:08:50.892478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.132 [2024-07-21 12:08:50.892486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:52.132 [2024-07-21 12:08:50.892493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:52.132 [2024-07-21 12:08:50.892504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.132 [2024-07-21 12:08:50.892523] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:52.132 [2024-07-21 12:08:50.894183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.132 [2024-07-21 12:08:50.894202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:52.132 [2024-07-21 12:08:50.894210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.667 ms 00:26:52.132 [2024-07-21 12:08:50.894217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.132 [2024-07-21 12:08:50.894242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.132 [2024-07-21 12:08:50.894250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:52.132 [2024-07-21 12:08:50.894259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:26:52.132 [2024-07-21 12:08:50.894266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.132 [2024-07-21 12:08:50.894283] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:52.132 [2024-07-21 12:08:50.894312] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:52.132 [2024-07-21 12:08:50.894346] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:52.132 [2024-07-21 12:08:50.894360] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:26:52.132 [2024-07-21 12:08:50.894435] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:52.132 [2024-07-21 12:08:50.894451] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:52.132 [2024-07-21 12:08:50.894460] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:26:52.132 [2024-07-21 12:08:50.894472] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:52.132 [2024-07-21 12:08:50.894480] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:52.132 [2024-07-21 12:08:50.894503] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:52.132 [2024-07-21 12:08:50.894510] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:52.132 [2024-07-21 12:08:50.894516] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:52.132 [2024-07-21 12:08:50.894523] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:52.132 [2024-07-21 12:08:50.894530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.132 [2024-07-21 12:08:50.894539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:52.132 [2024-07-21 12:08:50.894552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:26:52.132 [2024-07-21 12:08:50.894558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.132 [2024-07-21 12:08:50.894619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.132 [2024-07-21 12:08:50.894629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:52.132 [2024-07-21 12:08:50.894636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:26:52.132 [2024-07-21 12:08:50.894644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.132 [2024-07-21 12:08:50.894724] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:52.132 [2024-07-21 12:08:50.894736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:52.132 [2024-07-21 12:08:50.894743] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:52.132 [2024-07-21 12:08:50.894750] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:52.132 [2024-07-21 12:08:50.894757] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:52.132 [2024-07-21 12:08:50.894764] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:52.132 [2024-07-21 12:08:50.894770] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:52.132 [2024-07-21 12:08:50.894776] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:52.132 [2024-07-21 12:08:50.894786] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:52.132 [2024-07-21 12:08:50.894792] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:52.132 [2024-07-21 12:08:50.894798] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:52.132 [2024-07-21 12:08:50.894804] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:52.132 [2024-07-21 12:08:50.894810] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:52.132 [2024-07-21 12:08:50.894816] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:52.132 [2024-07-21 12:08:50.894843] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:52.132 [2024-07-21 12:08:50.894850] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:52.132 [2024-07-21 12:08:50.894856] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:52.132 [2024-07-21 12:08:50.894862] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:52.132 [2024-07-21 12:08:50.894868] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:26:52.132 [2024-07-21 12:08:50.894877] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:52.132 [2024-07-21 12:08:50.894884] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:52.132 [2024-07-21 12:08:50.894890] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:52.132 [2024-07-21 12:08:50.894896] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:52.132 [2024-07-21 12:08:50.894903] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:52.132 [2024-07-21 12:08:50.894912] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:52.132 [2024-07-21 12:08:50.894918] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:52.132 [2024-07-21 12:08:50.894924] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:52.132 [2024-07-21 12:08:50.894929] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:52.132 [2024-07-21 12:08:50.894936] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:52.132 [2024-07-21 12:08:50.894942] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:52.132 [2024-07-21 12:08:50.894947] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:52.132 [2024-07-21 12:08:50.894953] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:52.132 [2024-07-21 12:08:50.894960] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:52.132 [2024-07-21 12:08:50.894966] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:52.132 [2024-07-21 12:08:50.894972] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:52.132 [2024-07-21 12:08:50.894978] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:52.132 [2024-07-21 12:08:50.894983] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:52.132 [2024-07-21 12:08:50.894989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:52.132 [2024-07-21 12:08:50.894996] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:52.132 [2024-07-21 12:08:50.895003] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:52.132 [2024-07-21 12:08:50.895013] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:52.132 [2024-07-21 12:08:50.895020] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:52.132 [2024-07-21 12:08:50.895026] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:52.132 [2024-07-21 12:08:50.895031] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:52.132 [2024-07-21 12:08:50.895037] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:52.132 [2024-07-21 12:08:50.895046] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:52.132 [2024-07-21 12:08:50.895052] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:52.132 [2024-07-21 12:08:50.895059] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:52.132 [2024-07-21 12:08:50.895065] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:52.132 [2024-07-21 12:08:50.895071] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:52.132 [2024-07-21 12:08:50.895077] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:52.132 [2024-07-21 12:08:50.895083] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:52.132 [2024-07-21 12:08:50.895089] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:52.132 [2024-07-21 12:08:50.895097] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:52.132 [2024-07-21 12:08:50.895105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:52.133 [2024-07-21 12:08:50.895112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:52.133 [2024-07-21 12:08:50.895124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:52.133 [2024-07-21 12:08:50.895131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:52.133 [2024-07-21 12:08:50.895137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:52.133 [2024-07-21 12:08:50.895145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:52.133 [2024-07-21 12:08:50.895151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:52.133 [2024-07-21 12:08:50.895158] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:52.133 [2024-07-21 12:08:50.895164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:52.133 [2024-07-21 12:08:50.895171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:52.133 [2024-07-21 12:08:50.895178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:52.133 [2024-07-21 12:08:50.895185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:52.133 [2024-07-21 12:08:50.895191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:52.133 [2024-07-21 12:08:50.895198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:52.133 [2024-07-21 12:08:50.895205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:52.133 [2024-07-21 12:08:50.895212] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:52.133 [2024-07-21 12:08:50.895219] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:52.133 [2024-07-21 12:08:50.895226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
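As in the first startup, the two dumps agree on units once a block size is assumed: the layout dump reports MiB while the superblock dump counts FTL blocks, and the figures line up for a 4 KiB block (for example, 0x5000 blocks times 4 KiB is exactly the 80.00 MiB l2p region). A small sketch of the conversion, with the block size stated as an assumption (the base-device region list resumes below):

```c
/* Converts FTL-block counts from the superblock dump into the MiB
 * figures shown in the layout dump. The 4 KiB block size is an
 * assumption inferred from the log, not read from SPDK headers. */
#include <stdint.h>
#include <stdio.h>

#define ASSUMED_FTL_BLOCK_SIZE 4096ULL /* bytes per FTL block */

static double blocks_to_mib(uint64_t blocks)
{
    return (double)(blocks * ASSUMED_FTL_BLOCK_SIZE) / (1024.0 * 1024.0);
}

int main(void)
{
    printf("l2p: %.2f MiB\n", blocks_to_mib(0x5000)); /* 80.00 MiB */
    printf("sb:  %.2f MiB\n", blocks_to_mib(0x20));   /* 0.12 MiB  */
    return 0;
}
```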
00:26:52.133 [2024-07-21 12:08:50.895235] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:52.133 [2024-07-21 12:08:50.895241] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:52.133 [2024-07-21 12:08:50.895248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:52.133 [2024-07-21 12:08:50.895254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.133 [2024-07-21 12:08:50.895261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:52.133 [2024-07-21 12:08:50.895268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms 00:26:52.133 [2024-07-21 12:08:50.895274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.133 [2024-07-21 12:08:50.913412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.133 [2024-07-21 12:08:50.913486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:52.133 [2024-07-21 12:08:50.913514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.092 ms 00:26:52.133 [2024-07-21 12:08:50.913556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.133 [2024-07-21 12:08:50.913760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.133 [2024-07-21 12:08:50.913808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:52.133 [2024-07-21 12:08:50.913852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:26:52.133 [2024-07-21 12:08:50.913870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.133 [2024-07-21 12:08:50.928935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.133 [2024-07-21 12:08:50.928994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:52.133 [2024-07-21 12:08:50.929014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.980 ms 00:26:52.133 [2024-07-21 12:08:50.929028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.133 [2024-07-21 12:08:50.929095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.133 [2024-07-21 12:08:50.929111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:52.133 [2024-07-21 12:08:50.929133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:52.133 [2024-07-21 12:08:50.929148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.133 [2024-07-21 12:08:50.929288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.133 [2024-07-21 12:08:50.929316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:52.133 [2024-07-21 12:08:50.929332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:26:52.133 [2024-07-21 12:08:50.929361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.133 [2024-07-21 12:08:50.929545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.133 [2024-07-21 12:08:50.929576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:52.133 [2024-07-21 12:08:50.929606] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:26:52.133 [2024-07-21 12:08:50.929619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.133 [2024-07-21 12:08:50.937172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.133 [2024-07-21 12:08:50.937215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:52.133 [2024-07-21 12:08:50.937227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.535 ms 00:26:52.133 [2024-07-21 12:08:50.937237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.133 [2024-07-21 12:08:50.937392] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:52.133 [2024-07-21 12:08:50.937415] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:52.133 [2024-07-21 12:08:50.937436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.133 [2024-07-21 12:08:50.937446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:52.133 [2024-07-21 12:08:50.937460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:26:52.133 [2024-07-21 12:08:50.937469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.133 [2024-07-21 12:08:50.949863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.133 [2024-07-21 12:08:50.949893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:52.133 [2024-07-21 12:08:50.949903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.398 ms 00:26:52.133 [2024-07-21 12:08:50.949909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.133 [2024-07-21 12:08:50.950020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.133 [2024-07-21 12:08:50.950043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:52.133 [2024-07-21 12:08:50.950052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:26:52.133 [2024-07-21 12:08:50.950058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.133 [2024-07-21 12:08:50.950106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.133 [2024-07-21 12:08:50.950115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:52.133 [2024-07-21 12:08:50.950122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:52.133 [2024-07-21 12:08:50.950129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.133 [2024-07-21 12:08:50.950375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.133 [2024-07-21 12:08:50.950395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:52.133 [2024-07-21 12:08:50.950403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:26:52.133 [2024-07-21 12:08:50.950418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.133 [2024-07-21 12:08:50.950442] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:26:52.133 [2024-07-21 12:08:50.950454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.133 [2024-07-21 12:08:50.950462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:26:52.133 [2024-07-21 12:08:50.950469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:26:52.133 [2024-07-21 12:08:50.950475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.133 [2024-07-21 12:08:50.957268] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:52.133 [2024-07-21 12:08:50.957394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.133 [2024-07-21 12:08:50.957405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:52.133 [2024-07-21 12:08:50.957413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.915 ms 00:26:52.133 [2024-07-21 12:08:50.957423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.133 [2024-07-21 12:08:50.959345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.133 [2024-07-21 12:08:50.959373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:52.133 [2024-07-21 12:08:50.959381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.906 ms 00:26:52.133 [2024-07-21 12:08:50.959388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.133 [2024-07-21 12:08:50.959470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.133 [2024-07-21 12:08:50.959479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:52.133 [2024-07-21 12:08:50.959495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:52.133 [2024-07-21 12:08:50.959506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.133 [2024-07-21 12:08:50.959535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.133 [2024-07-21 12:08:50.959543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:52.133 [2024-07-21 12:08:50.959550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:52.134 [2024-07-21 12:08:50.959557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.134 [2024-07-21 12:08:50.959582] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:52.134 [2024-07-21 12:08:50.959591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.134 [2024-07-21 12:08:50.959604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:52.134 [2024-07-21 12:08:50.959611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:52.134 [2024-07-21 12:08:50.959620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.134 [2024-07-21 12:08:50.963517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.134 [2024-07-21 12:08:50.963548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:52.134 [2024-07-21 12:08:50.963557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.879 ms 00:26:52.134 [2024-07-21 12:08:50.963564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.134 [2024-07-21 12:08:50.963636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:52.134 [2024-07-21 12:08:50.963645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:52.134 [2024-07-21 12:08:50.963652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 
00:26:52.134 [2024-07-21 12:08:50.963659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:52.134 [2024-07-21 12:08:50.964599] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 73.064 ms, result 0 00:27:32.970  Copying: 26/1024 [MB] (26 MBps) Copying: 50/1024 [MB] (24 MBps) Copying: 75/1024 [MB] (25 MBps) Copying: 102/1024 [MB] (26 MBps) Copying: 127/1024 [MB] (25 MBps) Copying: 152/1024 [MB] (25 MBps) Copying: 179/1024 [MB] (26 MBps) Copying: 206/1024 [MB] (27 MBps) Copying: 234/1024 [MB] (27 MBps) Copying: 261/1024 [MB] (27 MBps) Copying: 288/1024 [MB] (26 MBps) Copying: 313/1024 [MB] (24 MBps) Copying: 338/1024 [MB] (25 MBps) Copying: 364/1024 [MB] (25 MBps) Copying: 390/1024 [MB] (26 MBps) Copying: 417/1024 [MB] (26 MBps) Copying: 444/1024 [MB] (27 MBps) Copying: 473/1024 [MB] (29 MBps) Copying: 500/1024 [MB] (26 MBps) Copying: 524/1024 [MB] (24 MBps) Copying: 548/1024 [MB] (24 MBps) Copying: 572/1024 [MB] (24 MBps) Copying: 597/1024 [MB] (24 MBps) Copying: 621/1024 [MB] (24 MBps) Copying: 645/1024 [MB] (24 MBps) Copying: 670/1024 [MB] (24 MBps) Copying: 695/1024 [MB] (24 MBps) Copying: 720/1024 [MB] (24 MBps) Copying: 745/1024 [MB] (24 MBps) Copying: 770/1024 [MB] (25 MBps) Copying: 796/1024 [MB] (25 MBps) Copying: 821/1024 [MB] (25 MBps) Copying: 846/1024 [MB] (25 MBps) Copying: 872/1024 [MB] (25 MBps) Copying: 897/1024 [MB] (25 MBps) Copying: 923/1024 [MB] (25 MBps) Copying: 949/1024 [MB] (26 MBps) Copying: 976/1024 [MB] (26 MBps) Copying: 1002/1024 [MB] (26 MBps) Copying: 1023/1024 [MB] (20 MBps) Copying: 1024/1024 [MB] (average 25 MBps)[2024-07-21 12:09:31.593317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.970 [2024-07-21 12:09:31.593456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:32.970 [2024-07-21 12:09:31.593493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:32.970 [2024-07-21 12:09:31.593514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.970 [2024-07-21 12:09:31.595252] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:32.970 [2024-07-21 12:09:31.597491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.970 [2024-07-21 12:09:31.597574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:32.970 [2024-07-21 12:09:31.597606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.146 ms 00:27:32.970 [2024-07-21 12:09:31.597635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.970 [2024-07-21 12:09:31.607432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.970 [2024-07-21 12:09:31.607507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:32.970 [2024-07-21 12:09:31.607535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.721 ms 00:27:32.970 [2024-07-21 12:09:31.607555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.970 [2024-07-21 12:09:31.607597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.970 [2024-07-21 12:09:31.607618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:27:32.970 [2024-07-21 12:09:31.607641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:32.970 [2024-07-21 12:09:31.607671] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.970 [2024-07-21 12:09:31.607736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.970 [2024-07-21 12:09:31.607761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:27:32.970 [2024-07-21 12:09:31.607871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:27:32.970 [2024-07-21 12:09:31.607900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.970 [2024-07-21 12:09:31.607925] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:32.970 [2024-07-21 12:09:31.607959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 129024 / 261120 wr_cnt: 1 state: open 00:27:32.970 [2024-07-21 12:09:31.608006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.608054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.608101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.608148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.608188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.608228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.608266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.608307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.608350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.608389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.608442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.608477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.608517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.608557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.608592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.608633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.608667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.608704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.608744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.608781] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.608825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.608865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.608913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.608954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.608997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.609037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.609081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.609126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:32.970 [2024-07-21 12:09:31.609183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.609234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.609275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.609314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.609362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.609409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.609447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.609487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.609525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.609568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.609603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.609667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.609704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.609745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.609779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.609834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 
12:09:31.609875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.609911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.609961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.609989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.609996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 
00:27:32.971 [2024-07-21 12:09:31.610148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 
wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:32.971 [2024-07-21 12:09:31.610372] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:32.971 [2024-07-21 12:09:31.610379] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c09ef44a-1311-49f8-9084-4c9a34d361fe 00:27:32.971 [2024-07-21 12:09:31.610388] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 129024 00:27:32.971 [2024-07-21 12:09:31.610394] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129056 00:27:32.971 [2024-07-21 12:09:31.610401] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 129024 00:27:32.971 [2024-07-21 12:09:31.610409] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0002 00:27:32.971 [2024-07-21 12:09:31.610424] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:32.971 [2024-07-21 12:09:31.610440] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:32.971 [2024-07-21 12:09:31.610447] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:32.971 [2024-07-21 12:09:31.610453] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:32.971 [2024-07-21 12:09:31.610459] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:32.971 [2024-07-21 12:09:31.610467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.971 [2024-07-21 12:09:31.610474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:32.971 [2024-07-21 12:09:31.610481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.547 ms 00:27:32.971 [2024-07-21 12:09:31.610488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.971 [2024-07-21 12:09:31.612242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.971 [2024-07-21 12:09:31.612267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:32.971 [2024-07-21 12:09:31.612280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.740 ms 00:27:32.971 [2024-07-21 12:09:31.612287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.971 [2024-07-21 12:09:31.612392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:32.971 [2024-07-21 12:09:31.612402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:32.971 [2024-07-21 12:09:31.612411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:27:32.971 [2024-07-21 12:09:31.612418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.971 [2024-07-21 12:09:31.617740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.971 [2024-07-21 12:09:31.617764] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:32.971 [2024-07-21 12:09:31.617772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.971 [2024-07-21 12:09:31.617778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.971 [2024-07-21 12:09:31.617826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.971 [2024-07-21 12:09:31.617835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:32.971 [2024-07-21 12:09:31.617842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.972 [2024-07-21 12:09:31.617849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.972 [2024-07-21 12:09:31.617877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.972 [2024-07-21 12:09:31.617891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:32.972 [2024-07-21 12:09:31.617898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.972 [2024-07-21 12:09:31.617905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.972 [2024-07-21 12:09:31.617919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.972 [2024-07-21 12:09:31.617926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:32.972 [2024-07-21 12:09:31.617933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.972 [2024-07-21 12:09:31.617949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.972 [2024-07-21 12:09:31.630569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.972 [2024-07-21 12:09:31.630606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:32.972 [2024-07-21 12:09:31.630615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.972 [2024-07-21 12:09:31.630622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.972 [2024-07-21 12:09:31.638995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.972 [2024-07-21 12:09:31.639023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:32.972 [2024-07-21 12:09:31.639032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.972 [2024-07-21 12:09:31.639039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.972 [2024-07-21 12:09:31.639086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.972 [2024-07-21 12:09:31.639094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:32.972 [2024-07-21 12:09:31.639106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.972 [2024-07-21 12:09:31.639112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.972 [2024-07-21 12:09:31.639133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.972 [2024-07-21 12:09:31.639140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:32.972 [2024-07-21 12:09:31.639148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.972 [2024-07-21 12:09:31.639154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.972 [2024-07-21 12:09:31.639213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:27:32.972 [2024-07-21 12:09:31.639225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:32.972 [2024-07-21 12:09:31.639233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.972 [2024-07-21 12:09:31.639242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.972 [2024-07-21 12:09:31.639265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.972 [2024-07-21 12:09:31.639275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:32.972 [2024-07-21 12:09:31.639282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.972 [2024-07-21 12:09:31.639289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.972 [2024-07-21 12:09:31.639336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.972 [2024-07-21 12:09:31.639346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:32.972 [2024-07-21 12:09:31.639354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.972 [2024-07-21 12:09:31.639364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.972 [2024-07-21 12:09:31.639406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:32.972 [2024-07-21 12:09:31.639425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:32.972 [2024-07-21 12:09:31.639433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:32.972 [2024-07-21 12:09:31.639440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:32.972 [2024-07-21 12:09:31.639547] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 47.317 ms, result 0 00:27:34.348 00:27:34.348 00:27:34.348 12:09:32 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:27:34.348 [2024-07-21 12:09:32.943292] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
00:27:34.348 [2024-07-21 12:09:32.943411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96677 ] 00:27:34.348 [2024-07-21 12:09:33.103185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.348 [2024-07-21 12:09:33.148687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:34.607 [2024-07-21 12:09:33.250233] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:34.607 [2024-07-21 12:09:33.250299] bdev.c:8114:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:34.607 [2024-07-21 12:09:33.397277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.607 [2024-07-21 12:09:33.397329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:34.607 [2024-07-21 12:09:33.397342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:34.607 [2024-07-21 12:09:33.397349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.607 [2024-07-21 12:09:33.397407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.607 [2024-07-21 12:09:33.397418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:34.607 [2024-07-21 12:09:33.397426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:27:34.607 [2024-07-21 12:09:33.397442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.607 [2024-07-21 12:09:33.397460] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:34.607 [2024-07-21 12:09:33.397673] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:34.607 [2024-07-21 12:09:33.397706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.607 [2024-07-21 12:09:33.397717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:34.607 [2024-07-21 12:09:33.397724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:27:34.607 [2024-07-21 12:09:33.397731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.607 [2024-07-21 12:09:33.398020] mngt/ftl_mngt_md.c: 453:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:27:34.607 [2024-07-21 12:09:33.398055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.607 [2024-07-21 12:09:33.398064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:34.607 [2024-07-21 12:09:33.398072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:27:34.607 [2024-07-21 12:09:33.398083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.607 [2024-07-21 12:09:33.398129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.607 [2024-07-21 12:09:33.398137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:34.607 [2024-07-21 12:09:33.398152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:27:34.607 [2024-07-21 12:09:33.398179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.607 [2024-07-21 12:09:33.398383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.607 [2024-07-21 
12:09:33.398400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:34.607 [2024-07-21 12:09:33.398408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:27:34.607 [2024-07-21 12:09:33.398424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.607 [2024-07-21 12:09:33.398498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.607 [2024-07-21 12:09:33.398510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:34.607 [2024-07-21 12:09:33.398518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:27:34.607 [2024-07-21 12:09:33.398535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.607 [2024-07-21 12:09:33.398558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.607 [2024-07-21 12:09:33.398574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:34.607 [2024-07-21 12:09:33.398581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:34.607 [2024-07-21 12:09:33.398598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.607 [2024-07-21 12:09:33.398619] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:34.607 [2024-07-21 12:09:33.400274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.607 [2024-07-21 12:09:33.400298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:34.607 [2024-07-21 12:09:33.400308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.661 ms 00:27:34.607 [2024-07-21 12:09:33.400326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.607 [2024-07-21 12:09:33.400354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.607 [2024-07-21 12:09:33.400362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:34.607 [2024-07-21 12:09:33.400369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:34.607 [2024-07-21 12:09:33.400376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.607 [2024-07-21 12:09:33.400405] ftl_layout.c: 603:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:34.607 [2024-07-21 12:09:33.400425] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:34.607 [2024-07-21 12:09:33.400460] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:34.607 [2024-07-21 12:09:33.400477] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x168 bytes 00:27:34.607 [2024-07-21 12:09:33.400555] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:34.607 [2024-07-21 12:09:33.400565] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:34.608 [2024-07-21 12:09:33.400575] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x168 bytes 00:27:34.608 [2024-07-21 12:09:33.400593] ftl_layout.c: 675:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:34.608 [2024-07-21 12:09:33.400602] ftl_layout.c: 677:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:34.608 [2024-07-21 12:09:33.400616] ftl_layout.c: 679:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:34.608 [2024-07-21 12:09:33.400623] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:34.608 [2024-07-21 12:09:33.400637] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:34.608 [2024-07-21 12:09:33.400651] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:34.608 [2024-07-21 12:09:33.400658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.608 [2024-07-21 12:09:33.400666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:34.608 [2024-07-21 12:09:33.400681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:27:34.608 [2024-07-21 12:09:33.400692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.608 [2024-07-21 12:09:33.400754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.608 [2024-07-21 12:09:33.400766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:34.608 [2024-07-21 12:09:33.400773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:27:34.608 [2024-07-21 12:09:33.400779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.608 [2024-07-21 12:09:33.400860] ftl_layout.c: 758:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:34.608 [2024-07-21 12:09:33.400878] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:34.608 [2024-07-21 12:09:33.400886] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:34.608 [2024-07-21 12:09:33.400893] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:34.608 [2024-07-21 12:09:33.400900] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:34.608 [2024-07-21 12:09:33.400906] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:34.608 [2024-07-21 12:09:33.400913] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:34.608 [2024-07-21 12:09:33.400919] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:34.608 [2024-07-21 12:09:33.400926] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:34.608 [2024-07-21 12:09:33.400932] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:34.608 [2024-07-21 12:09:33.400938] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:34.608 [2024-07-21 12:09:33.400947] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:34.608 [2024-07-21 12:09:33.400953] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:34.608 [2024-07-21 12:09:33.400960] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:34.608 [2024-07-21 12:09:33.400976] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:34.608 [2024-07-21 12:09:33.400982] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:34.608 [2024-07-21 12:09:33.400989] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:34.608 [2024-07-21 12:09:33.400995] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:34.608 [2024-07-21 12:09:33.401001] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:27:34.608 [2024-07-21 12:09:33.401007] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:34.608 [2024-07-21 12:09:33.401013] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:34.608 [2024-07-21 12:09:33.401019] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:34.608 [2024-07-21 12:09:33.401025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:34.608 [2024-07-21 12:09:33.401031] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:34.608 [2024-07-21 12:09:33.401037] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:34.608 [2024-07-21 12:09:33.401042] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:34.608 [2024-07-21 12:09:33.401049] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:34.608 [2024-07-21 12:09:33.401057] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:34.608 [2024-07-21 12:09:33.401064] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:34.608 [2024-07-21 12:09:33.401070] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:34.608 [2024-07-21 12:09:33.401076] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:34.608 [2024-07-21 12:09:33.401082] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:34.608 [2024-07-21 12:09:33.401089] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:34.608 [2024-07-21 12:09:33.401095] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:34.608 [2024-07-21 12:09:33.401101] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:34.608 [2024-07-21 12:09:33.401107] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:34.608 [2024-07-21 12:09:33.401113] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:34.608 [2024-07-21 12:09:33.401118] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:34.608 [2024-07-21 12:09:33.401124] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:34.608 [2024-07-21 12:09:33.401130] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:34.608 [2024-07-21 12:09:33.401136] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:34.608 [2024-07-21 12:09:33.401142] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:34.608 [2024-07-21 12:09:33.401148] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:34.608 [2024-07-21 12:09:33.401155] ftl_layout.c: 765:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:34.608 [2024-07-21 12:09:33.401162] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:34.608 [2024-07-21 12:09:33.401171] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:34.608 [2024-07-21 12:09:33.401177] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:34.608 [2024-07-21 12:09:33.401184] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:34.608 [2024-07-21 12:09:33.401191] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:34.608 [2024-07-21 12:09:33.401197] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:34.608 [2024-07-21 12:09:33.401204] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:34.608 [2024-07-21 12:09:33.401210] ftl_layout.c: 119:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:34.608 [2024-07-21 12:09:33.401217] ftl_layout.c: 121:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:34.608 [2024-07-21 12:09:33.401224] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:34.608 [2024-07-21 12:09:33.401232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:34.608 [2024-07-21 12:09:33.401246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:34.608 [2024-07-21 12:09:33.401253] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:34.608 [2024-07-21 12:09:33.401260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:34.608 [2024-07-21 12:09:33.401267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:34.608 [2024-07-21 12:09:33.401277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:34.608 [2024-07-21 12:09:33.401284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:34.608 [2024-07-21 12:09:33.401290] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:34.608 [2024-07-21 12:09:33.401296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:34.608 [2024-07-21 12:09:33.401303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:34.608 [2024-07-21 12:09:33.401309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:34.608 [2024-07-21 12:09:33.401316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:34.608 [2024-07-21 12:09:33.401322] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:34.608 [2024-07-21 12:09:33.401328] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:34.608 [2024-07-21 12:09:33.401335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:34.608 [2024-07-21 12:09:33.401341] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:34.608 [2024-07-21 12:09:33.401348] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:34.608 [2024-07-21 12:09:33.401356] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 
00:27:34.608 [2024-07-21 12:09:33.401363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:34.608 [2024-07-21 12:09:33.401370] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:34.608 [2024-07-21 12:09:33.401376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:34.608 [2024-07-21 12:09:33.401386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.608 [2024-07-21 12:09:33.401392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:34.608 [2024-07-21 12:09:33.401399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:27:34.608 [2024-07-21 12:09:33.401405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.608 [2024-07-21 12:09:33.418708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.608 [2024-07-21 12:09:33.418747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:34.608 [2024-07-21 12:09:33.418778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.285 ms 00:27:34.608 [2024-07-21 12:09:33.418786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.608 [2024-07-21 12:09:33.418880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.608 [2024-07-21 12:09:33.418890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:34.608 [2024-07-21 12:09:33.418912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:27:34.608 [2024-07-21 12:09:33.418926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.608 [2024-07-21 12:09:33.429214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.608 [2024-07-21 12:09:33.429260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:34.608 [2024-07-21 12:09:33.429273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.260 ms 00:27:34.608 [2024-07-21 12:09:33.429282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.608 [2024-07-21 12:09:33.429317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.608 [2024-07-21 12:09:33.429337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:34.609 [2024-07-21 12:09:33.429348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:34.609 [2024-07-21 12:09:33.429357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.609 [2024-07-21 12:09:33.429472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.609 [2024-07-21 12:09:33.429482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:34.609 [2024-07-21 12:09:33.429489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:27:34.609 [2024-07-21 12:09:33.429496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.609 [2024-07-21 12:09:33.429595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.609 [2024-07-21 12:09:33.429619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:34.609 [2024-07-21 12:09:33.429627] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:27:34.609 [2024-07-21 12:09:33.429640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.609 [2024-07-21 12:09:33.435214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.609 [2024-07-21 12:09:33.435249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:34.609 [2024-07-21 12:09:33.435267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.565 ms 00:27:34.609 [2024-07-21 12:09:33.435281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.609 [2024-07-21 12:09:33.435400] ftl_nv_cache.c:1723:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:34.609 [2024-07-21 12:09:33.435413] ftl_nv_cache.c:1727:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:34.609 [2024-07-21 12:09:33.435422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.609 [2024-07-21 12:09:33.435432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:34.609 [2024-07-21 12:09:33.435442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:27:34.609 [2024-07-21 12:09:33.435449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.609 [2024-07-21 12:09:33.445058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.609 [2024-07-21 12:09:33.445105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:34.609 [2024-07-21 12:09:33.445131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.613 ms 00:27:34.609 [2024-07-21 12:09:33.445138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.609 [2024-07-21 12:09:33.445235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.609 [2024-07-21 12:09:33.445244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:34.609 [2024-07-21 12:09:33.445263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:27:34.609 [2024-07-21 12:09:33.445271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.609 [2024-07-21 12:09:33.445312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.609 [2024-07-21 12:09:33.445328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:34.609 [2024-07-21 12:09:33.445338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:27:34.609 [2024-07-21 12:09:33.445345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.609 [2024-07-21 12:09:33.445574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.609 [2024-07-21 12:09:33.445595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:34.609 [2024-07-21 12:09:33.445605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:27:34.609 [2024-07-21 12:09:33.445611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.609 [2024-07-21 12:09:33.445627] mngt/ftl_mngt_p2l.c: 132:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:27:34.609 [2024-07-21 12:09:33.445639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.609 [2024-07-21 12:09:33.445646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L 
checkpoints 00:27:34.609 [2024-07-21 12:09:33.445653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:34.609 [2024-07-21 12:09:33.445660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.609 [2024-07-21 12:09:33.452424] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:34.609 [2024-07-21 12:09:33.452576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.609 [2024-07-21 12:09:33.452589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:34.609 [2024-07-21 12:09:33.452598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.913 ms 00:27:34.609 [2024-07-21 12:09:33.452609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.609 [2024-07-21 12:09:33.454583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.609 [2024-07-21 12:09:33.454612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:34.609 [2024-07-21 12:09:33.454627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.958 ms 00:27:34.609 [2024-07-21 12:09:33.454634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.609 [2024-07-21 12:09:33.454684] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:27:34.609 [2024-07-21 12:09:33.455186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.609 [2024-07-21 12:09:33.455210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:34.609 [2024-07-21 12:09:33.455225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.521 ms 00:27:34.609 [2024-07-21 12:09:33.455234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.609 [2024-07-21 12:09:33.455258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.609 [2024-07-21 12:09:33.455266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:34.609 [2024-07-21 12:09:33.455273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:34.609 [2024-07-21 12:09:33.455280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.609 [2024-07-21 12:09:33.455325] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:34.609 [2024-07-21 12:09:33.455342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.609 [2024-07-21 12:09:33.455349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:34.609 [2024-07-21 12:09:33.455357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:27:34.609 [2024-07-21 12:09:33.455366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.609 [2024-07-21 12:09:33.459434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.609 [2024-07-21 12:09:33.459469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:34.609 [2024-07-21 12:09:33.459479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.059 ms 00:27:34.609 [2024-07-21 12:09:33.459486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.609 [2024-07-21 12:09:33.459553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.609 [2024-07-21 12:09:33.459562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Finalize initialization 00:27:34.609 [2024-07-21 12:09:33.459570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:27:34.609 [2024-07-21 12:09:33.459581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.609 [2024-07-21 12:09:33.465520] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 66.621 ms, result 0 00:28:11.859  Copying: 1024/1024 [MB] (average 27 MBps)[2024-07-21 12:10:10.554850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.859 [2024-07-21 12:10:10.554943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:11.859 [2024-07-21 12:10:10.554965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:11.859 [2024-07-21 12:10:10.554976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.859 [2024-07-21 12:10:10.555006] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:11.859 [2024-07-21 12:10:10.556628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.859 [2024-07-21 12:10:10.556667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:11.859 [2024-07-21 12:10:10.556680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.608 ms 00:28:11.859 [2024-07-21 12:10:10.556696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.859 [2024-07-21 12:10:10.556983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.859 [2024-07-21 12:10:10.557004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:11.859 [2024-07-21 12:10:10.557016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:28:11.859 [2024-07-21 12:10:10.557026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.859 [2024-07-21 12:10:10.557063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.859 [2024-07-21 12:10:10.557076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:28:11.859 [2024-07-21 12:10:10.557086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:11.859 [2024-07-21
12:10:10.557096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.859 [2024-07-21 12:10:10.557174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.859 [2024-07-21 12:10:10.557186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:11.859 [2024-07-21 12:10:10.557197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:28:11.859 [2024-07-21 12:10:10.557207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.859 [2024-07-21 12:10:10.557224] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:11.859 [2024-07-21 12:10:10.557238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133632 / 261120 wr_cnt: 1 state: open 00:28:11.859 [2024-07-21 12:10:10.557263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557459] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 
[2024-07-21 12:10:10.557723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:11.859 [2024-07-21 12:10:10.557814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.557836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.557846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.557856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.557866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.557877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.557886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.557897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.557907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.557918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.557928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.557938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.557948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.557958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.557968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.557978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 
state: free 00:28:11.860 [2024-07-21 12:10:10.557988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.557998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 
0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:11.860 [2024-07-21 12:10:10.558302] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:11.860 [2024-07-21 12:10:10.558313] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c09ef44a-1311-49f8-9084-4c9a34d361fe 00:28:11.860 [2024-07-21 12:10:10.558323] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133632 00:28:11.860 [2024-07-21 12:10:10.558334] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 4640 00:28:11.860 [2024-07-21 12:10:10.558344] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 4608 00:28:11.860 [2024-07-21 12:10:10.558358] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0069 00:28:11.860 [2024-07-21 12:10:10.558368] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:11.860 [2024-07-21 12:10:10.558379] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:11.860 [2024-07-21 12:10:10.558389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:11.860 [2024-07-21 12:10:10.558398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:11.860 [2024-07-21 12:10:10.558407] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:11.860 [2024-07-21 12:10:10.558417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.860 [2024-07-21 12:10:10.558428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:11.860 [2024-07-21 12:10:10.558438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.196 ms 00:28:11.860 [2024-07-21 12:10:10.558465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.860 [2024-07-21 12:10:10.561812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.860 [2024-07-21 12:10:10.561864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:11.860 [2024-07-21 12:10:10.561877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.331 ms 00:28:11.860 [2024-07-21 12:10:10.561888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.860 [2024-07-21 12:10:10.562084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:11.860 [2024-07-21 12:10:10.562211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:11.860 [2024-07-21 12:10:10.562227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:28:11.860 [2024-07-21 12:10:10.562237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.860 [2024-07-21 12:10:10.571777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.860 [2024-07-21 12:10:10.571824] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:11.860 [2024-07-21 12:10:10.571835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.860 [2024-07-21 12:10:10.571851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.860 [2024-07-21 12:10:10.571903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.860 [2024-07-21 12:10:10.571914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:11.860 [2024-07-21 12:10:10.571922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.860 [2024-07-21 12:10:10.571937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.860 [2024-07-21 12:10:10.571998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.860 [2024-07-21 12:10:10.572009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:11.860 [2024-07-21 12:10:10.572016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.860 [2024-07-21 12:10:10.572023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.860 [2024-07-21 12:10:10.572038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.860 [2024-07-21 12:10:10.572053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:11.860 [2024-07-21 12:10:10.572061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.860 [2024-07-21 12:10:10.572068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.860 [2024-07-21 12:10:10.597278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.860 [2024-07-21 12:10:10.597359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:11.860 [2024-07-21 12:10:10.597370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.860 [2024-07-21 12:10:10.597378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.860 [2024-07-21 12:10:10.611940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.860 [2024-07-21 12:10:10.611986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:11.860 [2024-07-21 12:10:10.611998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.860 [2024-07-21 12:10:10.612006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.860 [2024-07-21 12:10:10.612088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.860 [2024-07-21 12:10:10.612098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:11.860 [2024-07-21 12:10:10.612111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.860 [2024-07-21 12:10:10.612119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.860 [2024-07-21 12:10:10.612152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.860 [2024-07-21 12:10:10.612161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:11.860 [2024-07-21 12:10:10.612168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.860 [2024-07-21 12:10:10.612175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.860 [2024-07-21 12:10:10.612245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:28:11.860 [2024-07-21 12:10:10.612256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:11.860 [2024-07-21 12:10:10.612268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.860 [2024-07-21 12:10:10.612275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.860 [2024-07-21 12:10:10.612305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.860 [2024-07-21 12:10:10.612316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:11.860 [2024-07-21 12:10:10.612323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.860 [2024-07-21 12:10:10.612332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.860 [2024-07-21 12:10:10.612384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.861 [2024-07-21 12:10:10.612405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:11.861 [2024-07-21 12:10:10.612413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.861 [2024-07-21 12:10:10.612424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.861 [2024-07-21 12:10:10.612472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:11.861 [2024-07-21 12:10:10.612493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:11.861 [2024-07-21 12:10:10.612500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:11.861 [2024-07-21 12:10:10.612509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:11.861 [2024-07-21 12:10:10.612646] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 57.889 ms, result 0 00:28:12.119 00:28:12.119 00:28:12.119 12:10:10 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:14.025 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:14.025 12:10:12 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:28:14.025 12:10:12 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:28:14.025 12:10:12 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:14.025 12:10:12 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:14.025 12:10:12 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:14.025 12:10:12 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 95208 00:28:14.025 12:10:12 ftl.ftl_restore_fast -- common/autotest_common.sh@946 -- # '[' -z 95208 ']' 00:28:14.025 Process with pid 95208 is not found 00:28:14.025 Remove shared memory files 00:28:14.025 12:10:12 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # kill -0 95208 00:28:14.025 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (95208) - No such process 00:28:14.025 12:10:12 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # echo 'Process with pid 95208 is not found' 00:28:14.025 12:10:12 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:28:14.025 12:10:12 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:14.025 12:10:12 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 
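The restore_kill teardown above first probes the test app with kill -0; the "(95208) - No such process" line is that probe failing because the process had already exited, so the script simply reports it and moves on to removing shared memory. A hedged sketch of the same liveness check in C (illustrative; autotest_common.sh implements this in shell):

/* kill() with signal 0 delivers nothing but still performs the
 * existence/permission check, mirroring the shell's `kill -0 $pid`. */
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>

static int process_alive(pid_t pid)
{
    if (kill(pid, 0) == 0)
        return 1;                  /* process exists and is signalable */
    return errno == EPERM ? 1 : 0; /* EPERM: exists but not ours; ESRCH: gone */
}

int main(void)
{
    printf("pid 95208 alive: %d\n", process_alive(95208));
    return 0;
}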
00:28:14.025 12:10:12 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_c09ef44a-1311-49f8-9084-4c9a34d361fe_band_md /dev/hugepages/ftl_c09ef44a-1311-49f8-9084-4c9a34d361fe_l2p_l1 /dev/hugepages/ftl_c09ef44a-1311-49f8-9084-4c9a34d361fe_l2p_l2 /dev/hugepages/ftl_c09ef44a-1311-49f8-9084-4c9a34d361fe_l2p_l2_ctx /dev/hugepages/ftl_c09ef44a-1311-49f8-9084-4c9a34d361fe_nvc_md /dev/hugepages/ftl_c09ef44a-1311-49f8-9084-4c9a34d361fe_p2l_pool /dev/hugepages/ftl_c09ef44a-1311-49f8-9084-4c9a34d361fe_sb /dev/hugepages/ftl_c09ef44a-1311-49f8-9084-4c9a34d361fe_sb_shm /dev/hugepages/ftl_c09ef44a-1311-49f8-9084-4c9a34d361fe_trim_bitmap /dev/hugepages/ftl_c09ef44a-1311-49f8-9084-4c9a34d361fe_trim_log /dev/hugepages/ftl_c09ef44a-1311-49f8-9084-4c9a34d361fe_trim_md /dev/hugepages/ftl_c09ef44a-1311-49f8-9084-4c9a34d361fe_vmap 00:28:14.025 12:10:12 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:28:14.025 12:10:12 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:14.025 12:10:12 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:28:14.025 00:28:14.025 real 2m59.926s 00:28:14.025 user 2m50.049s 00:28:14.025 sys 0m11.293s 00:28:14.025 12:10:12 ftl.ftl_restore_fast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:14.025 12:10:12 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:14.025 ************************************ 00:28:14.025 END TEST ftl_restore_fast 00:28:14.025 ************************************ 00:28:14.025 12:10:12 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:28:14.025 12:10:12 ftl -- ftl/ftl.sh@14 -- # killprocess 88314 00:28:14.025 12:10:12 ftl -- common/autotest_common.sh@946 -- # '[' -z 88314 ']' 00:28:14.025 12:10:12 ftl -- common/autotest_common.sh@950 -- # kill -0 88314 00:28:14.025 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 950: kill: (88314) - No such process 00:28:14.025 Process with pid 88314 is not found 00:28:14.025 12:10:12 ftl -- common/autotest_common.sh@973 -- # echo 'Process with pid 88314 is not found' 00:28:14.025 12:10:12 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:28:14.025 12:10:12 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=97103 00:28:14.025 12:10:12 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:14.025 12:10:12 ftl -- ftl/ftl.sh@20 -- # waitforlisten 97103 00:28:14.025 12:10:12 ftl -- common/autotest_common.sh@827 -- # '[' -z 97103 ']' 00:28:14.025 12:10:12 ftl -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.025 12:10:12 ftl -- common/autotest_common.sh@832 -- # local max_retries=100 00:28:14.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.025 12:10:12 ftl -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.025 12:10:12 ftl -- common/autotest_common.sh@836 -- # xtrace_disable 00:28:14.025 12:10:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:14.025 [2024-07-21 12:10:12.875859] Starting SPDK v24.05.1-pre git sha1 5fa2f5086 / DPDK 23.11.0 initialization... 
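After launching a fresh spdk_tgt (pid 97103), waitforlisten above blocks until the daemon accepts connections on its RPC socket at /var/tmp/spdk.sock. A minimal sketch of the readiness probe this boils down to — a single connect() attempt on the UNIX domain socket; the helper name is invented for illustration:

/* Probe whether the SPDK RPC UNIX socket is accepting connections yet. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int rpc_socket_ready(const char *path)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    int ok;

    if (fd < 0)
        return 0;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    close(fd);
    return ok; /* 1 once spdk_tgt is up and listening */
}

int main(void)
{
    printf("ready: %d\n", rpc_socket_ready("/var/tmp/spdk.sock"));
    return 0;
}

The script retries a check like this in a loop before issuing any rpc.py calls; its max_retries=100 bound is visible in the trace above.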
00:28:14.026 [2024-07-21 12:10:12.875967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97103 ] 00:28:14.287 [2024-07-21 12:10:13.025357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.287 [2024-07-21 12:10:13.069424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.963 12:10:13 ftl -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:28:14.963 12:10:13 ftl -- common/autotest_common.sh@860 -- # return 0 00:28:14.963 12:10:13 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:15.221 nvme0n1 00:28:15.221 12:10:13 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:28:15.221 12:10:13 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:15.221 12:10:13 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:15.221 12:10:14 ftl -- ftl/common.sh@28 -- # stores=42e53dec-25b3-4819-b71c-91640438d734 00:28:15.221 12:10:14 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:28:15.221 12:10:14 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 42e53dec-25b3-4819-b71c-91640438d734 00:28:15.480 12:10:14 ftl -- ftl/ftl.sh@23 -- # killprocess 97103 00:28:15.480 12:10:14 ftl -- common/autotest_common.sh@946 -- # '[' -z 97103 ']' 00:28:15.480 12:10:14 ftl -- common/autotest_common.sh@950 -- # kill -0 97103 00:28:15.480 12:10:14 ftl -- common/autotest_common.sh@951 -- # uname 00:28:15.480 12:10:14 ftl -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:28:15.480 12:10:14 ftl -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 97103 00:28:15.480 12:10:14 ftl -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:28:15.480 12:10:14 ftl -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:28:15.480 killing process with pid 97103 00:28:15.480 12:10:14 ftl -- common/autotest_common.sh@964 -- # echo 'killing process with pid 97103' 00:28:15.480 12:10:14 ftl -- common/autotest_common.sh@965 -- # kill 97103 00:28:15.480 12:10:14 ftl -- common/autotest_common.sh@970 -- # wait 97103 00:28:16.048 12:10:14 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:16.307 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:16.307 Waiting for block devices as requested 00:28:16.307 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:16.566 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:16.566 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:28:16.566 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:28:21.833 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:28:21.833 12:10:20 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:28:21.833 Remove shared memory files 00:28:21.833 12:10:20 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:21.833 12:10:20 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:28:21.833 12:10:20 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:28:21.833 12:10:20 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:28:21.833 12:10:20 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:21.833 12:10:20 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:28:21.833 00:28:21.833 real 12m37.251s 00:28:21.833 user 
14m39.816s 00:28:21.833 sys 1m22.621s 00:28:21.833 12:10:20 ftl -- common/autotest_common.sh@1122 -- # xtrace_disable 00:28:21.833 12:10:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:21.833 ************************************ 00:28:21.833 END TEST ftl 00:28:21.833 ************************************ 00:28:21.833 12:10:20 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:28:21.833 12:10:20 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:28:21.833 12:10:20 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:28:21.833 12:10:20 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:28:21.833 12:10:20 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:28:21.833 12:10:20 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:28:21.833 12:10:20 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:28:21.833 12:10:20 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:28:21.833 12:10:20 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:28:21.833 12:10:20 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:28:21.833 12:10:20 -- common/autotest_common.sh@720 -- # xtrace_disable 00:28:21.833 12:10:20 -- common/autotest_common.sh@10 -- # set +x 00:28:21.833 12:10:20 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:28:21.833 12:10:20 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:28:21.833 12:10:20 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:28:21.833 12:10:20 -- common/autotest_common.sh@10 -- # set +x 00:28:23.733 INFO: APP EXITING 00:28:23.733 INFO: killing all VMs 00:28:23.733 INFO: killing vhost app 00:28:23.733 INFO: EXIT DONE 00:28:23.992 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:24.568 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:24.568 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:24.568 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:28:24.568 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:28:25.135 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:25.393 Cleaning 00:28:25.393 Removing: /var/run/dpdk/spdk0/config 00:28:25.393 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:25.393 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:25.393 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:25.393 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:25.393 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:25.393 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:25.393 Removing: /var/run/dpdk/spdk0 00:28:25.393 Removing: /var/run/dpdk/spdk_pid74328 00:28:25.393 Removing: /var/run/dpdk/spdk_pid74489 00:28:25.393 Removing: /var/run/dpdk/spdk_pid74677 00:28:25.393 Removing: /var/run/dpdk/spdk_pid74769 00:28:25.393 Removing: /var/run/dpdk/spdk_pid74795 00:28:25.393 Removing: /var/run/dpdk/spdk_pid74907 00:28:25.393 Removing: /var/run/dpdk/spdk_pid74925 00:28:25.652 Removing: /var/run/dpdk/spdk_pid75083 00:28:25.652 Removing: /var/run/dpdk/spdk_pid75149 00:28:25.652 Removing: /var/run/dpdk/spdk_pid75220 00:28:25.652 Removing: /var/run/dpdk/spdk_pid75307 00:28:25.652 Removing: /var/run/dpdk/spdk_pid75385 00:28:25.652 Removing: /var/run/dpdk/spdk_pid75420 00:28:25.652 Removing: /var/run/dpdk/spdk_pid75456 00:28:25.652 Removing: /var/run/dpdk/spdk_pid75520 00:28:25.652 Removing: /var/run/dpdk/spdk_pid75639 00:28:25.652 Removing: /var/run/dpdk/spdk_pid76051 00:28:25.652 Removing: /var/run/dpdk/spdk_pid76098 00:28:25.652 Removing: 
/var/run/dpdk/spdk_pid76145 00:28:25.652 Removing: /var/run/dpdk/spdk_pid76161 00:28:25.652 Removing: /var/run/dpdk/spdk_pid76229 00:28:25.652 Removing: /var/run/dpdk/spdk_pid76240 00:28:25.652 Removing: /var/run/dpdk/spdk_pid76304 00:28:25.652 Removing: /var/run/dpdk/spdk_pid76320 00:28:25.652 Removing: /var/run/dpdk/spdk_pid76362 00:28:25.652 Removing: /var/run/dpdk/spdk_pid76380 00:28:25.652 Removing: /var/run/dpdk/spdk_pid76422 00:28:25.652 Removing: /var/run/dpdk/spdk_pid76440 00:28:25.652 Removing: /var/run/dpdk/spdk_pid76565 00:28:25.652 Removing: /var/run/dpdk/spdk_pid76601 00:28:25.652 Removing: /var/run/dpdk/spdk_pid76677 00:28:25.652 Removing: /var/run/dpdk/spdk_pid76730 00:28:25.652 Removing: /var/run/dpdk/spdk_pid76750 00:28:25.652 Removing: /var/run/dpdk/spdk_pid76817 00:28:25.652 Removing: /var/run/dpdk/spdk_pid76853 00:28:25.652 Removing: /var/run/dpdk/spdk_pid76888 00:28:25.652 Removing: /var/run/dpdk/spdk_pid76924 00:28:25.652 Removing: /var/run/dpdk/spdk_pid76959 00:28:25.652 Removing: /var/run/dpdk/spdk_pid76995 00:28:25.652 Removing: /var/run/dpdk/spdk_pid77030 00:28:25.652 Removing: /var/run/dpdk/spdk_pid77066 00:28:25.652 Removing: /var/run/dpdk/spdk_pid77096 00:28:25.652 Removing: /var/run/dpdk/spdk_pid77137 00:28:25.652 Removing: /var/run/dpdk/spdk_pid77167 00:28:25.652 Removing: /var/run/dpdk/spdk_pid77208 00:28:25.652 Removing: /var/run/dpdk/spdk_pid77238 00:28:25.652 Removing: /var/run/dpdk/spdk_pid77274 00:28:25.652 Removing: /var/run/dpdk/spdk_pid77309 00:28:25.652 Removing: /var/run/dpdk/spdk_pid77345 00:28:25.652 Removing: /var/run/dpdk/spdk_pid77380 00:28:25.652 Removing: /var/run/dpdk/spdk_pid77419 00:28:25.652 Removing: /var/run/dpdk/spdk_pid77457 00:28:25.652 Removing: /var/run/dpdk/spdk_pid77493 00:28:25.652 Removing: /var/run/dpdk/spdk_pid77529 00:28:25.652 Removing: /var/run/dpdk/spdk_pid77596 00:28:25.652 Removing: /var/run/dpdk/spdk_pid77689 00:28:25.652 Removing: /var/run/dpdk/spdk_pid77834 00:28:25.652 Removing: /var/run/dpdk/spdk_pid77907 00:28:25.652 Removing: /var/run/dpdk/spdk_pid77938 00:28:25.652 Removing: /var/run/dpdk/spdk_pid78356 00:28:25.652 Removing: /var/run/dpdk/spdk_pid78443 00:28:25.652 Removing: /var/run/dpdk/spdk_pid78545 00:28:25.652 Removing: /var/run/dpdk/spdk_pid78583 00:28:25.652 Removing: /var/run/dpdk/spdk_pid78614 00:28:25.652 Removing: /var/run/dpdk/spdk_pid78683 00:28:25.652 Removing: /var/run/dpdk/spdk_pid79299 00:28:25.652 Removing: /var/run/dpdk/spdk_pid79330 00:28:25.652 Removing: /var/run/dpdk/spdk_pid79797 00:28:25.652 Removing: /var/run/dpdk/spdk_pid79886 00:28:25.652 Removing: /var/run/dpdk/spdk_pid79979 00:28:25.911 Removing: /var/run/dpdk/spdk_pid80021 00:28:25.911 Removing: /var/run/dpdk/spdk_pid80052 00:28:25.911 Removing: /var/run/dpdk/spdk_pid80072 00:28:25.911 Removing: /var/run/dpdk/spdk_pid81902 00:28:25.911 Removing: /var/run/dpdk/spdk_pid82022 00:28:25.911 Removing: /var/run/dpdk/spdk_pid82036 00:28:25.911 Removing: /var/run/dpdk/spdk_pid82049 00:28:25.911 Removing: /var/run/dpdk/spdk_pid82123 00:28:25.911 Removing: /var/run/dpdk/spdk_pid82127 00:28:25.911 Removing: /var/run/dpdk/spdk_pid82145 00:28:25.911 Removing: /var/run/dpdk/spdk_pid82229 00:28:25.911 Removing: /var/run/dpdk/spdk_pid82233 00:28:25.911 Removing: /var/run/dpdk/spdk_pid82245 00:28:25.911 Removing: /var/run/dpdk/spdk_pid82329 00:28:25.911 Removing: /var/run/dpdk/spdk_pid82339 00:28:25.911 Removing: /var/run/dpdk/spdk_pid82351 00:28:25.911 Removing: /var/run/dpdk/spdk_pid83751 00:28:25.911 Removing: /var/run/dpdk/spdk_pid83829 
00:28:25.911 Removing: /var/run/dpdk/spdk_pid84715 00:28:25.911 Removing: /var/run/dpdk/spdk_pid85062 00:28:25.911 Removing: /var/run/dpdk/spdk_pid85123 00:28:25.911 Removing: /var/run/dpdk/spdk_pid85188 00:28:25.911 Removing: /var/run/dpdk/spdk_pid85242 00:28:25.911 Removing: /var/run/dpdk/spdk_pid85326 00:28:25.911 Removing: /var/run/dpdk/spdk_pid85389 00:28:25.911 Removing: /var/run/dpdk/spdk_pid85524 00:28:25.911 Removing: /var/run/dpdk/spdk_pid85786 00:28:25.911 Removing: /var/run/dpdk/spdk_pid85817 00:28:25.911 Removing: /var/run/dpdk/spdk_pid86240 00:28:25.911 Removing: /var/run/dpdk/spdk_pid86414 00:28:25.911 Removing: /var/run/dpdk/spdk_pid86499 00:28:25.911 Removing: /var/run/dpdk/spdk_pid86602 00:28:25.911 Removing: /var/run/dpdk/spdk_pid86641 00:28:25.911 Removing: /var/run/dpdk/spdk_pid86666 00:28:25.911 Removing: /var/run/dpdk/spdk_pid86971 00:28:25.911 Removing: /var/run/dpdk/spdk_pid86998 00:28:25.911 Removing: /var/run/dpdk/spdk_pid87054 00:28:25.911 Removing: /var/run/dpdk/spdk_pid87392 00:28:25.911 Removing: /var/run/dpdk/spdk_pid87532 00:28:25.911 Removing: /var/run/dpdk/spdk_pid88314 00:28:25.911 Removing: /var/run/dpdk/spdk_pid88423 00:28:25.911 Removing: /var/run/dpdk/spdk_pid88620 00:28:25.911 Removing: /var/run/dpdk/spdk_pid88706 00:28:25.911 Removing: /var/run/dpdk/spdk_pid89022 00:28:25.911 Removing: /var/run/dpdk/spdk_pid89261 00:28:25.911 Removing: /var/run/dpdk/spdk_pid89631 00:28:25.911 Removing: /var/run/dpdk/spdk_pid89846 00:28:25.911 Removing: /var/run/dpdk/spdk_pid89961 00:28:25.911 Removing: /var/run/dpdk/spdk_pid90001 00:28:25.911 Removing: /var/run/dpdk/spdk_pid90116 00:28:25.911 Removing: /var/run/dpdk/spdk_pid90130 00:28:25.911 Removing: /var/run/dpdk/spdk_pid90167 00:28:25.911 Removing: /var/run/dpdk/spdk_pid90342 00:28:25.911 Removing: /var/run/dpdk/spdk_pid90604 00:28:25.911 Removing: /var/run/dpdk/spdk_pid91012 00:28:25.911 Removing: /var/run/dpdk/spdk_pid91400 00:28:25.911 Removing: /var/run/dpdk/spdk_pid91805 00:28:25.911 Removing: /var/run/dpdk/spdk_pid92253 00:28:25.911 Removing: /var/run/dpdk/spdk_pid92399 00:28:25.911 Removing: /var/run/dpdk/spdk_pid92471 00:28:25.911 Removing: /var/run/dpdk/spdk_pid92982 00:28:25.911 Removing: /var/run/dpdk/spdk_pid93033 00:28:26.170 Removing: /var/run/dpdk/spdk_pid93448 00:28:26.170 Removing: /var/run/dpdk/spdk_pid93784 00:28:26.170 Removing: /var/run/dpdk/spdk_pid94242 00:28:26.170 Removing: /var/run/dpdk/spdk_pid94357 00:28:26.170 Removing: /var/run/dpdk/spdk_pid94390 00:28:26.170 Removing: /var/run/dpdk/spdk_pid94445 00:28:26.170 Removing: /var/run/dpdk/spdk_pid94490 00:28:26.170 Removing: /var/run/dpdk/spdk_pid94543 00:28:26.170 Removing: /var/run/dpdk/spdk_pid94728 00:28:26.170 Removing: /var/run/dpdk/spdk_pid94791 00:28:26.170 Removing: /var/run/dpdk/spdk_pid94847 00:28:26.170 Removing: /var/run/dpdk/spdk_pid94937 00:28:26.170 Removing: /var/run/dpdk/spdk_pid94966 00:28:26.170 Removing: /var/run/dpdk/spdk_pid95022 00:28:26.170 Removing: /var/run/dpdk/spdk_pid95208 00:28:26.170 Removing: /var/run/dpdk/spdk_pid95483 00:28:26.170 Removing: /var/run/dpdk/spdk_pid95864 00:28:26.170 Removing: /var/run/dpdk/spdk_pid96249 00:28:26.170 Removing: /var/run/dpdk/spdk_pid96677 00:28:26.170 Removing: /var/run/dpdk/spdk_pid97103 00:28:26.170 Clean 00:28:26.170 12:10:24 -- common/autotest_common.sh@1447 -- # return 0 00:28:26.170 12:10:24 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:28:26.170 12:10:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:26.170 12:10:24 -- 
common/autotest_common.sh@10 -- # set +x 00:28:26.170 12:10:24 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:28:26.170 12:10:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:26.170 12:10:24 -- common/autotest_common.sh@10 -- # set +x 00:28:26.170 12:10:25 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:26.429 12:10:25 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:26.429 12:10:25 -- spdk/autotest.sh@389 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:26.429 12:10:25 -- spdk/autotest.sh@391 -- # hash lcov 00:28:26.429 12:10:25 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:26.429 12:10:25 -- spdk/autotest.sh@393 -- # hostname 00:28:26.429 12:10:25 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:26.429 geninfo: WARNING: invalid characters removed from testname! 00:28:48.359 12:10:44 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:48.927 12:10:47 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:51.461 12:10:49 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:53.366 12:10:51 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:55.271 12:10:53 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:57.169 12:10:55 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:59.702 12:10:58 -- 
spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:59.702 12:10:58 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:59.702 12:10:58 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:59.702 12:10:58 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:59.702 12:10:58 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:59.702 12:10:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.702 12:10:58 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.702 12:10:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.702 12:10:58 -- paths/export.sh@5 -- $ export PATH 00:28:59.702 12:10:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:59.702 12:10:58 -- common/autobuild_common.sh@436 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:28:59.702 12:10:58 -- common/autobuild_common.sh@437 -- $ date +%s 00:28:59.702 12:10:58 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1721563858.XXXXXX 00:28:59.702 12:10:58 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1721563858.DcLsD9 00:28:59.702 12:10:58 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:28:59.702 12:10:58 -- common/autobuild_common.sh@443 -- $ '[' -n v23.11 ']' 00:28:59.702 12:10:58 -- common/autobuild_common.sh@444 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:28:59.702 12:10:58 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:28:59.702 12:10:58 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:28:59.702 12:10:58 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:28:59.702 12:10:58 -- common/autobuild_common.sh@453 -- $ get_config_params 00:28:59.702 12:10:58 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:28:59.702 12:10:58 -- 
00:28:59.702 12:10:58 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-xnvme'
00:28:59.702 12:10:58 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
00:28:59.702 12:10:58 -- pm/common@17 -- $ local monitor
00:28:59.702 12:10:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:59.702 12:10:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:59.702 12:10:58 -- pm/common@25 -- $ sleep 1
00:28:59.702 12:10:58 -- pm/common@21 -- $ date +%s
00:28:59.702 12:10:58 -- pm/common@21 -- $ date +%s
00:28:59.702 12:10:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721563858
00:28:59.702 12:10:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721563858
00:28:59.702 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721563858_collect-vmstat.pm.log
00:28:59.702 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721563858_collect-cpu-load.pm.log
00:29:00.641 12:10:59 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
00:29:00.641 12:10:59 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:29:00.641 12:10:59 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:29:00.641 12:10:59 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:29:00.641 12:10:59 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:29:00.641 12:10:59 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:29:00.641 12:10:59 -- spdk/autopackage.sh@19 -- $ timing_finish
00:29:00.641 12:10:59 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:29:00.641 12:10:59 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:29:00.641 12:10:59 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:29:00.641 12:10:59 -- spdk/autopackage.sh@20 -- $ exit 0
00:29:00.641 12:10:59 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:29:00.641 12:10:59 -- pm/common@29 -- $ signal_monitor_resources TERM
00:29:00.641 12:10:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:29:00.641 12:10:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:00.641 12:10:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:29:00.641 12:10:59 -- pm/common@44 -- $ pid=98826
00:29:00.641 12:10:59 -- pm/common@50 -- $ kill -TERM 98826
00:29:00.641 12:10:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:00.641 12:10:59 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:29:00.641 12:10:59 -- pm/common@44 -- $ pid=98828
00:29:00.641 12:10:59 -- pm/common@50 -- $ kill -TERM 98828
+ [[ -n 6098 ]]
+ sudo kill 6098
00:29:00.651 [Pipeline] }
00:29:00.671 [Pipeline] // timeout
00:29:00.677 [Pipeline] }
00:29:00.695 [Pipeline] // stage
00:29:00.701 [Pipeline] }
00:29:00.718 [Pipeline] // catchError
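The stop_monitor_resources trace shows the pidfile convention the resource monitors use: each collector records its PID under the power output directory, and shutdown is an existence check on each pidfile followed by kill -TERM. A minimal sketch of that pattern (POWER_DIR, MONITORS, and stop_monitors are illustrative names, not the pm/common internals):

    # Minimal pidfile-based shutdown, mirroring the pm/common trace above.
    POWER_DIR=/home/vagrant/spdk_repo/output/power   # illustrative path
    MONITORS=(collect-cpu-load collect-vmstat)

    stop_monitors() {
        local monitor pid
        for monitor in "${MONITORS[@]}"; do
            # Skip collectors that never started (no pidfile written).
            [[ -e $POWER_DIR/$monitor.pid ]] || continue
            pid=$(<"$POWER_DIR/$monitor.pid")
            kill -TERM "$pid"
        done
    }
    # Registered the same way the log registers stop_monitor_resources.
    trap stop_monitors EXIT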
00:29:00.728 [Pipeline] stage
00:29:00.731 [Pipeline] { (Stop VM)
00:29:00.745 [Pipeline] sh
00:29:01.029 + vagrant halt
00:29:03.588 ==> default: Halting domain...
00:29:11.726 [Pipeline] sh
00:29:12.008 + vagrant destroy -f
00:29:14.542 ==> default: Removing domain...
00:29:14.812 [Pipeline] sh
00:29:15.092 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:29:15.100 [Pipeline] }
00:29:15.117 [Pipeline] // stage
00:29:15.121 [Pipeline] }
00:29:15.137 [Pipeline] // dir
00:29:15.141 [Pipeline] }
00:29:15.157 [Pipeline] // wrap
00:29:15.162 [Pipeline] }
00:29:15.176 [Pipeline] // catchError
00:29:15.184 [Pipeline] stage
00:29:15.185 [Pipeline] { (Epilogue)
00:29:15.199 [Pipeline] sh
00:29:15.489 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:29:20.791 [Pipeline] catchError
00:29:20.792 [Pipeline] {
00:29:20.804 [Pipeline] sh
00:29:21.087 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:29:21.656 Artifacts sizes are good
00:29:21.665 [Pipeline] }
00:29:21.681 [Pipeline] // catchError
00:29:21.692 [Pipeline] archiveArtifacts
00:29:21.700 Archiving artifacts
00:29:21.825 [Pipeline] cleanWs
00:29:21.836 [WS-CLEANUP] Deleting project workspace...
00:29:21.836 [WS-CLEANUP] Deferred wipeout is used...
00:29:21.841 [WS-CLEANUP] done
00:29:21.843 [Pipeline] }
00:29:21.859 [Pipeline] // stage
00:29:21.864 [Pipeline] }
00:29:21.879 [Pipeline] // node
00:29:21.883 [Pipeline] End of Pipeline
00:29:21.916 Finished: SUCCESS
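For reference, the Stop VM and Epilogue stages above reduce to a short teardown sequence; a sketch using the same commands and workspace path as logged (the ordering comments are interpretation, not log output):

    # Teardown performed by the final pipeline stages.
    vagrant halt            # graceful shutdown of the test VM
    vagrant destroy -f      # remove the libvirt domain without prompting
    # Move results out of the VM workspace, then compress and size-check
    # them before Jenkins archives the artifacts.
    mv output /var/jenkins/workspace/nvme-vg-autotest/output
    jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
    jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh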